29 Oct 2018

[Blog] Transcript: Making sense out of the fragmented OTT delivery landscape.


Hello Everyone,

Last month was IBC2018 and they've finally released the presentations. We thought it would be nice to add our CTO Jaron's presentation as he explains how to make sense of the fragmented OTT landscape.

You can find the full presentation, slides and a transcript below.




Alright, hello everyone. Well as Ian just introduced I'm going to talk about the fragmented OTT delivery landscape.


Because, well, it is really fragmented. There are several types of streaming.

To begin we've got real time streaming, the streaming that everyone knows and loves, so to speak.

And we have pseudo streaming, which is like if you have an HTTP server and you pretend a file is there - but it's not really a file, it's actually a live stream, and you just keep sending it as if it were a file.

But that wasn't enough, of course! Segmented streaming came afterwards - which is the current popular method, where you segment a stream in several parts and you have an index and then that index just updates with new parts as they become available.
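The rolling index idea can be sketched in a few lines of Python (segment names here are made up for illustration):

```python
from collections import deque

# A live index holds a sliding window of the newest segments, dropping
# old ones as new ones become available (much like an HLS playlist).
index = deque(maxlen=3)

for n in range(5):
    index.append("segment_%d.ts" % n)  # oldest entry falls off automatically

# After 5 segments only the 3 newest remain; a player re-fetches this
# index periodically to discover the new parts.
print(list(index))  # ['segment_2.ts', 'segment_3.ts', 'segment_4.ts']
```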

Now it would be nice if it was this simple and it was just three methods, but unfortunately it is a little bit more complicated.


All of these methods have several protocols they can use to deliver. There are different protocols for real-time, pseudo and segmented streaming, and of course none of these are compatible with each other.

There are also players, besides all this. There are a ton of them; just in this hall alone there are at least seven or eight, and they all say they do the same thing. And they do, and they all work fine. But how do you know what to pick? It's hard.


We should really stop making new standards.


So, real time streaming was the first one I mentioned. There's RTMP, which is the well known protocol that many systems still use as their ingest. But it's not used very often in delivery, as Flash is no longer supported in browsers. It's a very outdated protocol: it doesn't support HEVC, AV1 or Opus; all the newer codecs aren't in there. But the protocol itself is highly supported by encoders and streaming services. It's something that's hard to get rid of.

Then there's RTSP with RTP at the core, which is an actual standard, unlike RTMP which is something Adobe just invented. RTSP supports a lot of things and it's very versatile. You can transport almost anything through it, but it's getting old. There's an RTSP version 2, which no one supports, but it exists. Version one is well supported, but only in certain markets, like IP cameras. Most other things, not so much, and in browsers you can forget about it.

And then there's something newer for real-time streaming, which is WebRTC. WebRTC is the new cool kid on the block and it uses SRTP internally which is RTP with security added. Internally it's basically the same thing, but this works on browsers, which is nice as that means you can actually use it for most consumers unlike RTSP.

That gives you a bit of an overview of real time streaming. Besides these protocols you can also pick between TCP and UDP. TCP is what most internet connections use. It's a reliable method to send things, but because of it being reliable it's a bit slower and the latency is a bit higher. UDP is unreliable but has very low latency. Depending on what you're trying to do you might want to use one or the other.

All of these protocols work with TCP and/or UDP. RTMP is always TCP, RTSP can be either and WebRTC is always UDP. The spec of WebRTC says it can also use TCP, but I don't know a single browser that supports it, so it's kind of a moot point.


Then there's pseudo streaming, which, as I mentioned before, uses fake files that are playing while they're downloading. They're infinite in length and duration, so you can't actually download them. Well, you can, but you end up with a huge file and you don't know exactly where the beginning and the end is, so it's not very nice.

While pseudo streaming is a pretty good idea in theory, there are some downsides. There's disagreement on what format to pseudo-stream in, because there are lots of formats - like FLV, MP4, OGG, etcetera - and they all sort of work, but none perfectly. The biggest downside is that you cannot cache these streams, as they're infinite in length. A proxy server will not store them and you cannot send them to a CDN - plus, how do you decide where the beginning and end are? So pseudo streaming is a nice method, but it doesn't work very well in a scalable system.


Now segmented streaming kind of solves that problem, because when you cut the stream into little files and have an index to say where those files are you can upload those little files to your CDN or cache them in a caching server and these files will not change. You just add more and remove others and the system works.

There are some disagreements here too between the different formats. Like, what do we use to index? HLS uses text, but DASH uses XML. They contain the same information, but differ in the way of writing it. The container format for storing the segments themselves is also not settled: HLS uses TS and DASH uses MP4, though they are kind of standardizing on fMP4 now - but let's not go too deep here. The best practices and allowed combinations - which codecs work and which do not, do you align the keyframes or not - all of that differs between protocols as well. It's hard to reach an agreement there too.

The biggest problem in segmented streaming is the high latency. Many players want to buffer a couple of segments before playing, which means that if your segments are several seconds long, you will have a minimum latency of a couple of segments times several seconds each. Which is not real-time in my understanding of the word "real-time".
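The back-of-the-envelope math, with assumed but typical numbers:

```python
# Rough floor on segmented-streaming latency: players often buffer a few
# segments before starting playback, so the minimum delay behind live is
# roughly (segments buffered) x (segment duration). Numbers are assumptions.
segment_duration = 6   # seconds per segment (a common default)
buffered_segments = 3  # many players buffer around three segments

min_latency = buffered_segments * segment_duration
print(min_latency)  # 18 seconds behind live, before any network delay
```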

The compatibility with players and devices is also hard to follow. HLS works fine in iOS, but DASH does not, unless it's fMP4 and you put a different index in front of it - and then it'll only play on newer iOS models. It's hard to keep track of what will play where; that's also a kind of fragmentation you will need to solve for all of this.


So I kind of lied during the introduction when I had this slide up; there's even more fragmentation than just these three types and their subtypes.

There's also encrypted streaming. When it comes to encrypted streaming there's Fairplay, Playready, Widevine and CENC, which tries to combine them a little bit. But even in that they don't agree on what encryption scheme to use. So encryption is fragmented on two different levels.

Then there are reliable transports, which are getting some popularity now. These are intended for use between servers, because you generally don't use them to deliver to the end consumer. There are several options here too: some of these are companies/protocols that have been around for a while, some are relatively new, some are still in development, some are being standardized and some are not. That's also a type of fragmentation you may have to deal with if you do OTT streaming.


When it comes to encrypted streaming there is the common encryption standard, CENC. Common encryption, that is what it stands for, but it's not really common because it only standardizes the format and how to transport it. It standardizes on fMP4, it standardizes on where the encryption keys are, etc. But not what type of encryption to use. All encryption types use a block cipher, but some are counter based and others are not. So depending on what type of DRM you're using you might have to use one or the other. It's not really standardized, yet it is, so it's confusing on that level as well.


Then the reliable transports, they are intended for server to server. All of them use these techniques in some combination. Some add a little bit of extra fluff or remove some of it. But they all use these techniques at the core.

Forward error correction sends extra data with the stream that allows you to calculate the contents of data that is not arriving. This means not wasting any time asking for retransmits, since you can just recalculate what was missing so you don't have to ask the other server and have another round-trip in between.
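A toy illustration of the idea in Python, using a single XOR parity packet (real FEC schemes are more sophisticated, but the principle is the same):

```python
# Toy forward error correction: one XOR parity packet per group lets the
# receiver reconstruct any single lost packet without a retransmit.
packets = [b"aaaa", b"bbbb", b"cccc"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*packets))

# Suppose packet 1 is lost in transit: XOR the survivors with the parity
# to recalculate it, with no extra round-trip to the sender.
recovered = bytes(x ^ y ^ z for x, y, z in zip(packets[0], packets[2], parity))
print(recovered == b"bbbb")  # True: the missing data was recomputed locally
```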

Retransmits are sort of self-explanatory: the receiving end says "hey, I didn't receive packet X, can you retransmit it? Send me another copy". This wastes time, but eventually you do always get all the data, so you can resolve the stream properly.
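The receiving side's bookkeeping can be sketched like this (sequence numbers and the loss pattern are made up):

```python
# Toy receiver logic for retransmit-based recovery: track which sequence
# numbers arrived and ask the sender again for the gaps.
expected = set(range(10))
arrived = {0, 1, 2, 4, 5, 7, 8, 9}  # packets 3 and 6 were lost

missing = sorted(expected - arrived)
for seq in missing:
    print("didn't receive packet %d, please send another copy" % seq)

# Each request costs a round-trip, but eventually all the data arrives.
```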

Bonding is something on a different level altogether where you connect multiple network interfaces like wireless network and GSM and you send data over both, hoping that with the combination of everything it will all end up arriving.

If you combine all three techniques of course you will get really good reception, at the cost of lots of overhead.

There's no standardization at all yet on reliable transports. It's very unclear what the advantages and disadvantages are of all these available ones. The ones listed in the previous slide all claim to be the best, to be perfect and to use a combination of the techniques. There's no real guide as to which you should be using.

So... lots of fragmentation in OTT.


So what do you do to fix all that fragmentation? Now this is where my marketing kicks in.

Right there is our booth. We are DDVTech, we make MistServer, and it's a technology you can use to build your own systems on top of. We give you the engine underneath your own system and help you solve all of these problems, so you can focus on what makes your business unique and not have to worry about standardization, what to implement, or what tomorrow's next hot thing is going to be.

We also allow you to auto-select protocols based on the stream contents and device you're trying to play on or what the network conditions are. Basically everything you need to be successful when you're building an OTT platform.


That’s the end of my presentation, if you have any questions you can drop by our booth or shoot us an email on our info address and we’ll help you out and get talking.

13 Sep 2018

[News] Now at IBC2018


Hello everyone!

Like last year we are exhibiting at IBC2018 this year. If you happen to attend IBC please don't forget to drop by and say hi - we'd be more than happy to meet you!

We'll be available at booth C10 in Hall 14. Though if you want to talk to a specific person it is a good idea to contact us to schedule a meeting.

I'm also sorry to say that since most of our team will be attending IBC, some of the more difficult e-mail support questions we receive might be answered a bit more slowly.

23 Aug 2018

[Blog] How to build a Twitch-alike service with MistServer


Hey all! First of all, our apologies for the lack of blog posts recently - we've been very busy with getting the new 2.14 release out to everyone. Expect more posts here so that we can catch back up to our regular posting pace!

Anyway - hi 👋! This is Jaron, not with a technical background article (next time, I promise!) but with a how-to on how you can build your own social streaming service (like Twitch or YouTube live) using MistServer. We have more and more customers running these kinds of implementations lately, and I figured it would be a good idea to outline the steps needed for a functional integration for future users.

A social streaming service usually has several common components:

  • A login system with users
  • The ability to push (usually RTMP) to an "origin" server (e.g. sending your stream to the service)
  • A link between those incoming pushes and the login system (so the service knows which stream belongs to which user)
  • A check to see if a viewer is allowed to watch a specific stream (e.g. paid streams, password-protected streams, age-restricted streams, etc)
  • The ability to record streams and play them back later as Video on Demand

Now, MistServer can't help you with the login system - but you probably don't want it to, either. You'll likely already have a login system in place and want to keep that and its existing database. It's not MistServer's job to keep track of your users anyway. The Unix philosophy is to do one thing and do it well, and Mist does streaming; nothing else.

How to support infinite streams without configuring them all

When you're running a social streaming service, you need to support effectively infinite streams. MistServer allows you to configure streams over the API, but that is not ideal: Mist starts to slow down after a few hundred streams are configured, and the configuration becomes a mess of old streams.

Luckily, MistServer has a feature that allows you to configure once and use that stream config an infinite number of times: wildcard streams. There's no need to do anything special to activate wildcard mode: all live streams automatically have it enabled. It works by placing a plus symbol (+) behind the stream name, followed by any unique text identifier. For example, if you configured a stream called "test" you could broadcast to the stream "test", but also to "test+1" and "test+2" and "test+foobar". All of them will use the configuration of "test", but use separate buffers, have separate on/off states and can be requested as if they are fully separate streams.
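For integration code, this naming convention is easy to handle. A minimal sketch in Python (the helper below is my own illustration, not part of MistServer):

```python
def split_wildcard(stream_name):
    """Split a wildcard stream name into the configured base stream and
    the unique identifier behind the plus symbol (empty if none)."""
    base, _, identifier = stream_name.partition("+")
    return base, identifier

# All of these resolve to the configuration of the "test" stream:
print(split_wildcard("test"))         # ('test', '')
print(split_wildcard("test+1"))       # ('test', '1')
print(split_wildcard("test+foobar"))  # ('test', 'foobar')
```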

So, a sensible way to set things up is to use, for example, the name "streams" as the stream name, and then put a plus symbol and the username behind it to create the infinite separate streams. For example, user "John" could have the stream "streams+John".

Receiving RTMP streams in the commonly accepted format for social streaming

Usually, social streaming services accept RTMP URLs in a common format with the stream key at the end, while MistServer's native RTMP URL format is slightly different. It's inconvenient for users to have to comply with Mist's native RTMP URL format, so it makes sense to tweak the config so they are able to use the more common format instead.

The ideal method for this is the RTMP_PUSH_REWRITE trigger. This trigger will call an executable/script or retrieve a URL, with the RTMP URL and the IP address of the user attempting to push as payload, before MistServer does any parsing on it whatsoever. Whatever your script or URL returns is then parsed by MistServer as if it were the actual RTMP URL, and processing continues as normal afterwards. Returning a blank URL results in the push attempt being rejected and disconnected. Check MistServer's manual (under "Integration", subchapter "Triggers") for the documentation of this trigger.

An example in PHP could look like this:

<?php
//Retrieve the data from Mist
$payload = file_get_contents('php://input');
//Split payload into lines
$lines = explode("\n", $payload);
//Now $lines[0] contains the URL, $lines[1] contains the IP address.

//This function is something you would implement to make this trigger script "work"
$user = parseUser($lines[0], $lines[1]);
if ($user != ""){
  echo "rtmp://".$user;
}else{
  echo ""; //Empty response, to disconnect the user
}
//Take care not to print anything else after the response, not even any newlines! MistServer expects a single line as response and nothing more.

The idea is that the parseUser function looks up the stream key from the RTMP URL in a database of users, and returns the username attached to that stream key. The script then returns the new RTMP URL, effectively allowing the push as well as directing it to a unique stream for the authorized user. Problem solved!

How to know when a user starts/stops broadcasting

This one is pretty easy with triggers as well: the STREAM_BUFFER trigger is ideal for this purpose. The STREAM_BUFFER trigger will go off every time the buffer changes state, meaning that it goes off whenever it fills, empties, goes into "unstable" mode or "stable" mode. Effectively, MistServer will let you know when the stream goes online and offline, but also when the stream settings aren't ideal for the user's connection and when they go back to being good again. All in real-time! Simply set up the trigger and store the user's stream status into your own local database to keep track. Check MistServer's manual (under "Integration", subchapter "Triggers") for the documentation of this trigger.
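As a sketch of the bookkeeping this enables (the two-line payload format below is an assumption for illustration; check the manual for the real trigger payload):

```python
# Hypothetical STREAM_BUFFER handler: keep a local map of stream states so
# your service knows who is live. This ASSUMES a payload with the stream
# name on the first line and the new buffer state on the second; the real
# payload layout is documented in MistServer's manual.
stream_status = {}

def on_stream_buffer(payload):
    lines = payload.split("\n")
    stream, state = lines[0], lines[1]
    stream_status[stream] = state  # in practice: write this to your database

on_stream_buffer("streams+John\nFULL")
on_stream_buffer("streams+John\nEMPTY")  # John stopped broadcasting
print(stream_status)  # {'streams+John': 'EMPTY'}
```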

Access control

Now, you may not want every stream accessible to every user. Limiting this access in any way is a concept usually referred to as "access control". My colleague Carina already wrote an excellent blog post on this subject last year, and I suggest you give it a read for more on how to set up access control with MistServer.

Recording streams and playing them back later

The last piece of the puzzle: recording and Video on Demand. To record streams, you can use our push functionality. This sounds a little out of place, until you wrap your head around the idea that MistServer considers recording to be a "push to file". A neat little trick is that configuring an automatic push for the stream "stream+" will automatically activate this push for every single wildcard instance of the stream "stream"! Combined with our support for text replacements (detailed in the manual in the chapter 'Target URLs and settings'), you can have this automatically record to file. For example, a nice target URL could be: /mnt/recordings/$wildcard/$datetime.mkv. That URL will sort recordings into folders per username and name the files after the date and time the stream started. This example records in Matroska (MKV) format (more on that format in my next blog post, by the way!), but you could also record in FLV or TS format simply by changing the extension.
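To get a feel for what that target URL expands to, here is a mock expansion in Python (the substitution logic and the datetime format are my own illustration, not MistServer's implementation):

```python
from datetime import datetime

def expand_target(template, wildcard, started):
    """Mock expansion of $wildcard and $datetime in a push target URL.
    The replacement rules here are illustrative assumptions."""
    return (template
            .replace("$wildcard", wildcard)
            .replace("$datetime", started.strftime("%Y.%m.%d.%H.%M.%S")))

url = expand_target("/mnt/recordings/$wildcard/$datetime.mkv",
                    "John", datetime(2018, 8, 23, 20, 15, 0))
print(url)  # /mnt/recordings/John/2018.08.23.20.15.00.mkv
```

This shows how recordings end up sorted into one folder per username, named after the stream's start time.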

If you want to know when a recording has finished, how long it is, and what filename it has... you guessed it, we have a trigger for that purpose too. Specifically, RECORDING_END. This trigger fires off whenever a recording finishes writing to file, and the payload contains all relevant details on the new recording. As with the previous triggers, the manual, under "Integration", subchapter "Triggers", has all relevant details.

There is nothing special you need to do to make the recordings playable through MistServer as well - they can simply be set up like any other VoD stream. But ideally, you'll want to use something similar to what was described in another of our blog posts last year, on how to efficiently access a large media library. Give it a read here, if you're interested.

In conclusion

Hopefully that was all you needed to get started with using MistServer for social streaming! As always, contact us if you have any questions or feedback!

22 Aug 2018

[Release] Stable release 2.14 now available!


Hello everyone! Stable release 2.14 of MistServer is now available! The full change log is available here and downloads are here. Our Pro-edition customers with active licenses will receive a new build notification in their e-mail automatically.

Here are some highlights:

  • Full MKV/WebM support for both input and output
  • (Pro) MKV recording support
  • Websocket versions of many data feeds, to improve on polling-based techniques
  • (Websocket-based) JSON metadata track input and output
  • RTSP, RTMP and JSON timestamps are automatically synced together when sent to the same stream
  • (Pro) Added USER_END trigger which triggers when an access log entry is written
  • Significant improvements to RTSP and HTTP(S) handling (mostly relevant for Pro edition)
  • More precise track selection mechanic
  • Greatly improved overall system stability
  • Many other small fixes/improvements/etc. See changelog for full list!
5 Jun 2018

[Blog] Generating a live test stream from a server using command line


Hello everyone! Today I wanted to talk about testing live streams. As you will probably have guessed: in order to truly test a live stream you'll need to be able to give it a live input. In some cases that might be a bit of a challenge, especially if you only have shell access and no live input available. It's for those situations that we've got a script that uses ffmpeg to generate a live feed which we call videogen. The script itself is made for Linux servers, but you could take the ffmpeg command line and use it for any server able to run ffmpeg.

What is videogen

Videogen is a simple generated live stream for testing live input/playback without the need for an actual live source somewhere. It is built on some of the examples available at the ffmpeg wiki site. It looks like this:

picture of videogen in action


As you might've suspected, in order to use videogen you'll need ffmpeg. Make sure to have it installed or have the binaries available in order to run videogen.

Installing videogen

Place the videogen file in your /usr/local/bin directory, or make your own videogen by pasting this code in a file and making it executable:


 #!/bin/bash
 ffmpeg -re -f lavfi -i "aevalsrc=if(eq(floor(t)\,ld(2))\,st(0\,random(4)*3000+1000))\;st(2\,floor(t)+1)\;st(1\,mod(t\,1))\;(0.6*sin(1*ld(0)*ld(1))+0.4*sin(2*ld(0)*ld(1)))*exp(-4*ld(1)) [out1]; testsrc=s=800x600,drawtext=borderw=5:fontcolor=white:fontsize=30:text='%{localtime}/%{pts\:hms}':x=\(w-text_w\)/2:y=\(h-text_h-line_h\)/2 [out0]" \
 -acodec aac -vcodec h264 -strict -2 -pix_fmt yuv420p -profile:v baseline -level 3.0 \
 "$@"

Using videogen

Videogen is rather easy to use, but it does require some manual input, as you need to specify the output. You can also override any of the codec settings inside, in case you want or need to use something other than our default settings.

The only required manual input is the type of output you want and the output URL (or file). For MistServer your output options are:

RTSP

 videogen -f rtsp rtsp://ADDRESS:PORT/STREAM_NAME

TS Unicast

 videogen -f mpegts udp://ADDRESS:PORT

TS Multicast

 videogen -f mpegts udp://MULTICASTADDRESS:PORT

As it's all run locally it doesn't really matter which protocol you'll be using, except for one point: RTMP cannot handle multiple bitrates using this method, so if you want to create a multi bitrate videogen you'll usually want to use TS.

Additional parameters

You'll have access to any of the additional parameters that ffmpeg provides for both video and audio encoding simply by adding them after the videogen command. When a parameter is given twice, ffmpeg uses the last one given, so anything you add overwrites our defaults. For all the ffmpeg parameters we recommend checking the ffmpeg documentation for codecs, video and audio.

Some of the parameters we tend to use more often are:


-g 25

This determines when keyframes show up. It sets the number of frames to pass before inserting a keyframe. When set to 25 you'll get one keyframe per second, as videogen runs at 25fps.


-s 1920x1080

This changes the resolution. The default of videogen is 800x600, so setting this to 1920x1080 will make it an "HD" stream, though the quality difference is barely noticeable with this script. We tend to use different screen resolutions to verify a track is working correctly.

-c:v hevc or -c:v h264

This changes the video codec. The default is h264 with baseline profile at level 3.0, which should be compatible with any modern device. Changing the codec to H265 (HEVC) or "default" h264 changes compatibility and performance, which might be exactly what you want to test. Do note that HEVC cannot work over RTMP; use RTSP or TS instead!

-c:a mp3 -ar 44100

This changes the audio codec. The default is aac, so knowing how to set mp3 instead can be handy. Just be sure to add an audio rate, as MP3 tends to bug out when it's not set. We tend to use 44100, as most devices will work with this audio rate.

Multibitrate videogen

Obviously you'll want to try out a multi bitrate videogen as well. You can, but you'll want to use TS instead of RTMP, as RTMP cannot handle multiple bitrates through a single stream as push input.

You can find our multi bitrate videogen here.

You can also make an executable file with the following command in it:


 #!/bin/bash
 # Multibitrate videogen. If you want to edit qualities or codecs, edit the
 # parameters per track profile. If you want to add qualities, be sure to map
 # them first (as audio or video, depending on what kind of track you want to
 # add). Video tracks will generally need -pix_fmt yuv420p in order to work
 # with this script.

 exec ffmpeg -hide_banner -re -f lavfi -i "aevalsrc=if(eq(floor(t)\,ld(2))\,st(0\,random(4)*3000+1000))\;st(2\,floor(t)+1)\;st(1\,mod(t\,1))\;(0.6*sin(1*ld(0)*ld(1))+0.4*sin(2*ld(0)*ld(1)))*exp(-4*ld(1)) [out1]; testsrc=s=800x600,drawtext=borderw=5:fontcolor=white:fontsize=30:text='%{localtime}/%{pts\:hms}':x=\(w-text_w\)/2:y=\(h-text_h-line_h\)/2 [out0]" \
 -map a:0 -c:a:0 aac -strict -2 \
 -map a:0 -c:a:1 mp3 -ar:a:1 44100 -ac:a:1 1 \
 -map v:0 -c:v:0 h264 -pix_fmt yuv420p -profile:v:0 baseline -level 3.0 -s:v:0 800x600 -g:v:0 25 \
 -map v:0 -c:v:1 h264 -pix_fmt yuv420p -profile:v:1 baseline -level 3.0 -s:v:1 1920x1080 -g:v:1 25  \
 -map v:0 -c:v:2 hevc -pix_fmt yuv420p -s:v:2 1920x1080 -g:v:2 25 \
 "$@"

This will create a multi bitrate stream with aac and mp3 audio, 800x600 and 1920x1080 h264 video tracks, and a single 1920x1080 h265 (HEVC) track. That should cover "most" multi bitrate needs.

You will always want to combine this with the ts output for ffmpeg, so using it will come down to:

 multivideogen -f mpegts udp://ADDRESS:PORT

Using videogen or multivideogen with MistServer directly

Of course you can also use videogen or multivideogen without a console; you will still have to put the scripts on your server (preferably in the /usr/local/bin folder), however.

To use them together with MistServer just use ts-exec and the mpegts output of ffmpeg like this:

MistServer source:

 ts-exec:videogen -f mpegts -
 ts-exec:multivideogen -f mpegts -

Example of how to fill in MistServer in the Interface

You can set the streams to "Always on" to have a continuous live stream, or leave them on default settings and only start the live stream when you need it. Keep in mind that as long as they're active they will use CPU.
