News

20 Dec 2017

[Blog] Live streaming with Wirecast and MistServer

Hey everyone! As Jaron mentioned I would do the next blog post. Since our blog post about OBS Studio and MistServer is quite popular I figured I would start adding other pushing applications, this time I'll talk about Telestream Wirecast.

Wirecast

Wirecast is an application meant for live streaming; its main focus is making it easy to create a live stream with a professional look and feel. It's a great piece of software if you want that professional result without a lot of hassle.

Basic RTMP information

This information will be very familiar to those who read how to push with OBS Studio to MistServer, so feel free to skip it.

Most popular consumer streaming applications use RTMP to send data towards their broadcast target. The most confusing part for newer users is where to put which address, mostly because the same syntax is used for both publishing and broadcasting.

Standard RTMP url syntax

rtmp://HOST:PORT/APPLICATION/STREAM_NAME

Where:

  • HOST: The IP address or host name of the server you are trying to reach
  • PORT: The port to be used; if left out it will use the default 1935 port.
  • APPLICATION: Defines which module should be used when connecting. Within MistServer this value is ignored or used as password protection. The value must be provided, but may be empty.
  • STREAM_NAME: The name of the stream, used to match incoming stream data to a stream id or name.

This might still be somewhat confusing, but hopefully it will become clear as you read this post. These will be the settings I will be using in the examples below.

  • Address of server running Wirecast: 192.168.137.37
  • Address of server running MistServer: 192.168.137.64
  • Port: Default 1935 used
  • Application: not used by MistServer; we use live to prevent unreadable URLs.
  • Stream name: livestream
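
Filled into the syntax above (with the default port written out explicitly), these settings combine into the following RTMP URL:

rtmp://192.168.137.64:1935/live/livestream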

Set up the stream in MistServer

Setting up the stream in MistServer is easy: just go to the stream window and add a stream. For the stream name pick anything you like, but remember it, as you will need it in Wirecast later on. For the source select push://ADDRESS_OF_SERVER_RUNNING_WIRECAST. In this example I will go with:

  • Stream name: livestream
  • Source: push://192.168.137.37

Settings used in MistServer for this blog

Booting Wirecast

First you will enter the boot screen, where you can pick a previously saved template or start with a new one. We will start with a new one, so just click continue in the bottom right corner.

Image of the Wirecast start screen

And you should see the start interface.

Image of the Wirecast start interface

Setting up Sources

Luckily Wirecast is quite easy to set up. You add sources to your layers; sources can be audio, video, or both at the same time. For this example I'll just add a simple media file, but you could add multiple sources to multiple layers and switch between presets. It's one of the reasons to use Wirecast, so I would recommend checking out all the possibilities once you get the chance.

Adding a stream to Wirecast

Setting up the output

Setting up the output can be done through the output settings menu in the top left.

Choosing output settings in the Outputs menu

Choose a custom RTMP server when setting everything up. Most important are the Address and Stream. You will need to fill in the address of MistServer and the stream you have set up previously. We will go with the following:

  • Address: rtmp://192.168.137.64/live/
  • Stream: livestream

Setting up the encoder

Now, Wirecast has a lot of presets, but they're all a bit heavy for my taste. If you just want to be done fast I would recommend the 720p x264 2Mbps profile, as it's the closest to what you'll need if you're a starting user and unsure of your requirements. If you do know what you need or want, feel free to ignore this bit. Just be aware that Wirecast tends to insert relatively few keyframes, which can drastically change latency.

If you want to tweak the settings a bit I recommend the following settings:

  • encoder: x264
  • width: 1280
  • height: 720
  • frames per second: 30
  • average bitrate: 1200
  • quality: 3 (very fast encoding)
  • x264 command line options: --bframes 0
  • profile: high
  • keyframe every: 30-150

Leave the rest of the settings at their defaults.

Recommended encoder settings for normal streaming in Wirecast

This profile should work for most streams you will want to send over a normal internet connection without being in the way of other internet traffic.

Edit 2021-06-24: We also recommend setting --bframes 0 in the x264 command line options. B-frames can reduce the bandwidth of live streams, but some protocols have trouble with B-frames in live streams, which can cause weird stutters. To avoid this, simply turn B-frames off.

Setting the layers to live

By pressing the go button your current stream will transition towards the live preview on the left, following the rules to the left of the button. Only what is on the live preview will be pushed out towards your chosen server, so be sure that you're happy with it.

Push the streams to the live preview so they will be pushed when you start broadcasting

Push towards MistServer

You start the push by pressing the broadcast button; it's at the top left and looks a bit like a Wi-Fi icon. Alternatively you can click output and select start / stop broadcasting. If it lights up green you're pushing towards MistServer and the stream should become available within moments; if not, go through your settings, as you might have made a typo.

Start your stream push by pressing broadcast

Check if it is working in MistServer

To check if it is working in MistServer, all you have to do is press the preview button in the stream menu. If everything is set up correctly you will see your stream appear there. If you would like to stop your live stream, just stop the broadcast in Wirecast by pressing the broadcast button or the start / stop broadcasting option.

Check the live preview to see if the push is coming into MistServer

Getting the stream to your viewers

As always, MistServer can help you with this. In the Embed panel (found in the streams panel or stream configurations) you can find embeddable code and the direct stream URLs. Use the embeddable code to embed our player, or use the stream URLs for your own preferred player solution, and you should be all set! Happy streaming!

The embed information for this stream preview

The next blog post will kick off the 🎆new year🎆 and be made by Carina. She will write about how to combine MistServer with your webserver.

Edit 2021-06-24: Added x264 recommendation

15 Dec 2017

[Blog] MistServer's internals in detail

Hey everyone 👋! It's Jaron again, here to explain the internals of MistServer some more 🔧⚙. Last time I explained DTSC, our internal media format, and this time I will talk about how MistServer is split up over multiple executables and how exactly DTSC (and other means) are used to communicate between these parts. Also, bonus: what you can do by manually running the various MistServer executables.

Extremely modular

When we first designed MistServer, one of our main goals was extreme crash-resistance and general resilience against any type of problem or attack. To accomplish this, each "active" (active here meaning that it is either being received or sent) stream is maintained by a single "input" process (think of an input as an origin or source for the data). Each connection is maintained by a single "output" process (think of these as sinks for the data). Finally, there is the "controller" process that monitors and controls all of the above, and provides the MistServer API and single point of control.

This very fragmented setup, where each task is handled by not just a separate thread but a complete separate process, was chosen because a crash in a single thread can still affect the other threads in that same process. However, a crash in a process will almost never affect the stability of another process.

Processes and threads are almost the same thing in Linux anyway, and the kernel is very good about re-using static memory allocations, so the extra overhead is negligible. That little bit of extra overhead would make this a bad design for a generic network-based server, but with media specifically the amount of data tends to be large enough that the extra stability is most definitely worth the slightly higher resource usage per connection. In most cases the bandwidth is saturated before the other resources are used up anyway.

The flow of control is rather loosely defined on purpose: "inputs" are started as-needed, directly by the "output" that wants to receive media data. The outputs are spawned from listening processes that wait for connections on sockets, and these listeners are started by the controller. Each output as well as live inputs report statistics and health data to the controller.

The media data itself is made available in DTSC format through shared memory pages, which can freely be read by all processes of the same system user. Metadata on the media data is provided through a custom binary structure, which is written to only by the input and read from by the outputs. This structure is locked only while being written to (roughly once per second), and the many readers do not have to lock the structure to read simultaneously. A similar method is used to report back to the controller.

On a side note: we're actually working on making this entire process fully lock-free, through a special "Reliable Access" shared memory structure we've devised. This structure accomplishes simultaneous writes and reads safely, without needing to lock anything. We're hoping to release this significant system-wide speed boost in early 2018. More on this in a future blog post!

Error recovery

Now, this becomes especially interesting when any of MistServer's processes crash or fail in some other way. Since MistServer was written with dependability in mind, you may never have experienced this. So let me walk you through what happens if something goes wrong:

Should any of the outputs crash or fail, the single connection it was maintaining will be severed. Nobody else will be affected, and the controller knows that (and when) the process has crashed, because it will stop reporting back at regular intervals. If the process is frozen or stuck in some kind of loop, the controller will forcibly kill the process to ensure the stability of the rest of the system. There is no data to clean up, since all data used by outputs is in shared structures maintained by the corresponding input.

Should any of the inputs crash or fail, each input has a dedicated "angel process" watching over it that will take notice. Since inputs maintain the shared memory structure, there is a potential to leak a lot of memory should these processes suddenly disappear without doing proper cleanup. The angel process will clean up all memory left behind by the input, and then re-start the input. The outputs never even notice the input has stopped and restarted, and will just take slightly longer to load while the input is recovered. No connections are severed at all in this case (unless the stream in question was a live stream, since the timing information is lost during cleanup).

Should the controller itself crash, this too has an angel process watching over it for the same reason that the inputs do. The controller maintains several structures that contain state information, as well as the structures that the inputs and outputs use to report back. These structures are all known beforehand, which lets us do a neat trick: instead of cleaning up the structures, the newly started replacement controller loads its state information from the existing structures. This allows the controller to literally pick up where the previous one left off, without any of the inputs or outputs even noticing what happened.

Especially cool is that the above behaviour also allows for rolling updates. The MistServer binaries can be replaced by new versions, and the controller told to restart itself. Any new connections will use the newly installed binaries while old connections keep using the old ones. The same holds true for the input processes. Eventually, the whole server will be updated, without ever dropping a single connection in the process.

Playing with the binaries more directly

Because of the modular nature of MistServer, nothing is stopping you from running some inputs or outputs manually alongside what is automatically run. Here are some useful examples:

Making an output write to file. Some of MistServer's outputs are able to write directly to a file (the same formats that we support for recording). For example, you can run MistOutFLV to write any stream to a FLV file as follows: MistOutFLV -s STREAMNAME OUTPUT_FILE.flv. This will write the stream STREAMNAME to the newly created file OUTPUT_FILE.flv. Normally MistServer requires recordings to specify the full output path, but when running the output manually this is not a requirement.

Piping an output into another application. Some of MistServer's outputs are able to write directly to a pipe. For example, you can run MistOutHTTPTS to write any stream to a pipe in TS format as follows: MistOutHTTPTS -s STREAMNAME -. This will write the stream STREAMNAME to stdout. A wide variety of applications is able to process TS over a pipe; so many, in fact, that we included a special "ts-exec:" output which allows you to do exactly this, with full support for scheduling as well as auto-start when streams become active.
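
As a hypothetical illustration of this piping approach (stream and file names are placeholders), you could feed the TS output straight into ffmpeg to make an MP4 copy of a stream:

MistOutHTTPTS -s STREAMNAME - | ffmpeg -i - -c copy OUTPUT.mp4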

Piping another application into an input. Some of MistServer's inputs are able to read directly from a pipe. For example, you can run MistInTS to read any stream from a pipe in TS format as follows: MistInTS -s STREAMNAME -. This will read stdin into stream STREAMNAME. A wide variety of applications is able to output TS over a pipe; so many in fact that we included a special "ts-exec:" input which allows you to do exactly this, with full support for auto-start and auto-stop as viewers come and go.
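
Conversely, a sketch of the reverse direction (again with placeholder names), pushing an ffmpeg encode into the TS input over a pipe, reusing the ffmpeg options from our earlier encoding post:

ffmpeg -re -i INPUT.mp4 -c:a aac -strict -2 -c:v h264 -f mpegts - | MistInTS -s STREAMNAME -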

Debugging and finding errors/mistakes. Is a specific output or input giving you trouble, but the logs in MistServer's controller are not accurate enough or contain too many unrelated messages from other processes? Just run the relevant input or output manually at a higher debug level, and the messages will print to your console instead of being collected by the controller.

Keeping an input active while testing. In MistServer, some inputs can be set to "Always on" in the configuration, which keeps them permanently on, even without active viewers. Sometimes you want to force this behavior for testing purposes. Running an input manually for an unconfigured stream will trigger this behaviour until manually shut down using a standard kill signal (i.e. Ctrl+C in the terminal).

Checking outputs (MistServer or other sources) for problems. MistServer comes with a collection of "Analysers" which allow you to debug and pretty-print various protocols and file formats. These are especially useful for developers. Explaining these in detail is beyond the scope of this blog post (more on these analysers in a future post!), but you can run them all with the --help commandline parameter to get an overview of supported options and modes.

Querying live stream health. The MistAnalyserDTSC analyser can be used (at least under Linux) to find out more about a live buffer's health in JSON format. To request this data, simply run it as follows: MistAnalyserDTSC -D 1 /dev/shm/MstSTRMstreamname (replace streamname with the name of your stream).

In closing

Keep in mind that all of this information is accurate for the current stable version of MistServer (2.13) as well as most of the older versions. In the near future we'll be updating the internal formats (see "On a side note" above in the section "Extremely modular"), which will make some of this information obsolete. There will be another blog post when that happens!

This post just gave a very high-level overview of Mist's internals and how the various applications connect together to form the complete software package. Particularly the analysers deserve more attention.

I hope you have a better idea of MistServer's inner workings now. Any questions? As always feel free to contact us! See you next time! The next blog post 💬 will be by Balder on how to stream using MistServer and Wirecast.

30 Nov 2017

[Release] Stable release 2.13 now available!

Hello everyone! Stable release 2.13 of MistServer is now available! The full change log is available here and downloads are here. Our Pro-edition customers with active licenses will receive a new build notification in their e-mail automatically.

Here are some highlights:

  • (Pro-only) Subtitle support: sideloaded srt input, HLS output, DASH output, MP4 input & output
  • (Pro-only) Significantly improved recording and scheduled pushes system
  • Most core systems now support proxy servers, HTTP Basic/Digest authentication and HTTPS
  • (Pro-only) Triggers and HLS pull input likewise support all of the above now
  • Fixes for Facebook and YouTube support
  • TS-based inputs now support timestamp rollover properly and repeatedly
  • Fixed CVE-2017-16884, discovered by hyp3rlinx / apparitionsec
  • Many other small fixes/improvements/etc. See changelog for full list!
20 Nov 2017

[Blog] Scheduled Playout

Hello everyone, Erik here with a quick post about a new feature we're currently working on: simulating a live stream based on one or multiple VoD assets scheduled to play out continuously.

This 'simulated live' type of stream is usable in various scenarios, ranging from a small broadcaster or hobby user who wants to set up a continuous stream consisting of a limited amount of pre-recorded footage, to a large station wanting to maintain full control of what programs to broadcast at which time.

We decided that this would be a nice feature to include in our software, and after our nearly finished 2.13 release we will be working at full speed to incorporate our new style of internal metadata storage in memory, which will allow multiple inputs to contribute to a single stream. This will not only enable the development of our scheduled playout system, but will also create the possibility of merging multiple separate source files into a single multi-bitrate stream, or having multiple different versions of encryption active at the same time.

Playlist creation

The first scenario we will support is single-file playout loopback. This will allow a single file to be looped continuously, while to viewers it will look like a single live stream. Though not useful in many cases, it provides an easy setup if your programming never changes, keeping the stream active at all times.

With some more advanced configuration, we will be supporting m3u files. This file format is what most people know from HLS, and it provides a straightforward way to handle a simple list of filenames without any extra information. Any extended headers available in the file will be ignored, and both .m3u and .m3u8 extensions will be supported.

Using this playlist setup will not only allow you to schedule multiple files in a loop, it will also allow you to edit the playlist while it is running. After each entry finishes playing, the playlist is reloaded before determining what the next file is. This allows you to amend your playlist as it runs, replacing one program with another, or even continuously updating your playlist file so it never loops, acting as an actual live broadcast channel.

That's it for this small update. If you have any requests or questions about this upcoming feature, please let us know.

Our next post will be by Jaron, with an explanation about how MistServer works internally.

2 Nov 2017

[Blog] DTSC: MistServer's internal media format

Hey everyone, this is Jaron again, and it's time for another blog post! This time I'm going to dedicate some time to our internal media format, DTSC.

DTSC stands for "DDVTech Stream Container", and is the format MistServer internally converts all inputs into. The outputs then read this format and convert it to whatever they are supposed to output to the end-user. Doing things this way allows us to write outputs and inputs generically, without needing to know in advance which of each will be available. It's one of the biggest reasons why MistServer is so modular, and why we have been able to add inputs and outputs so regularly.

DTSC itself is a container/transport format taking inspiration from the FLV, JSON, MKV and MP4 formats. It tries to take the good parts of those formats without making anything overly complicated. It's packet-based, with a (repeatable) header that contains initialization and seeking information. All packets are size-prepended, and the data format is based on a simplified form of binary JSON. These properties allow DTSC to be both used as a storage format on disk and as a transport format over networks.

We're planning to release the DTSC specification as well as a sample implementation as public domain in the near future, because we see possibilities in replacing RTMP with DTSC for live ingest purposes in the long term. (On that note: if you are interested in contributing to or discussing the possibilities of the DTSC specification, please contact us!)

Besides the internal use (which usually only exists in RAM and is never written to disk at any point), DTSC is used by MistServer in two more places: our header files (the .dtsh files you may have noticed appearing alongside your VoD assets) and by the DTSC Pull input.

When MistServer's various inputs read a VoD file from storage, they generate a DTSC header and store it beside the original file as a .dtsh file. On future accesses to the file, this header is used to speed up loading from the file. It can safely be deleted (and will regenerate the next time the file is accessed) and will auto-delete if the format changes because of MistServer updates (for example, the upcoming 2.13 update will add a new optional field, and thus force a regenerate of all headers so the new field will show up). This file helps us provide a consistent speed across all media storage formats, and provides an index for files that normally do not have an index, such as TS files.

The DTSC Pull input allows you to pull live streams from other MistServer instances, using DTSC as the transport. This means it is a live replication and distribution format that is compatible with an unlimited number of tracks/qualities and works for all codecs. Unfortunately, MistServer is the only software (at time of writing) that has implemented DTSC as a streaming input/output format, so you can only take advantage of DTSC distribution between MistServer instances (for example, for load balancing live streams). There are plans to also make DTSC usable for VoD distribution in the near future.

Hopefully this article helped shed some light on MistServer's internal processes regarding file formats and replication. Next time, my colleague Erik will write about our upcoming scheduled play-out system!

27 Oct 2017

[Blog] Library playback with the STREAM_SOURCE trigger

Hello readers! Today I will talk about how MistServer can be set up to work with a large video library.

The issue

When you want MistServer to serve a VOD file, you'd normally add a stream, and configure the path to the file as the stream source.
If you have a folder of several video files, you'd add a folder stream, with its source pointing to the path of the folder. The individual files are then accessible using the main streamname, a +, and then the filename, myfolderstream+video.mp4 for example.
However, these implementations have limits. Having too many configured streams will considerably slow down MistServer. Folder streams are inconvenient for larger libraries, look unprofessional, and are not able to access subfolders.
Once your video library grows beyond a certain point, it's wise to consider a different configuration. You can set up one or more streams with the settings you'd like to use, and then use the STREAM_SOURCE trigger to rewrite the stream source to whichever file you'd like to stream.

In this blog post I'll discuss an example implementation of this method, using PHP.

Understanding the STREAM_SOURCE trigger

I'd like to explain the inner workings of this method by setting up a stream, that, when requested, plays a random video from the library.
The first step is to configure the stream in MistServer, through the Management Interface.
Let's create a new stream, and call it random. The source doesn't really matter, as we'll be overwriting it, but let's set it to one of our video files. That way, if the trigger doesn't work for some reason, that video will be played.

Now, let's go ahead and configure the trigger.
It should trigger on STREAM_SOURCE. Applies to should be set to the stream we just created, random. The handler URL should point to where our trigger script will be hosted. We want MistServer to use the page output, so it should be set to blocking. Let's set the default response to the same fallback video we configured as the stream source.

Alright, to the PHP! We'll need to echo the path to the video we want the stream to play.

First, confirm the page is being requested by MistServer's STREAM_SOURCE trigger:

if ($_SERVER["HTTP_X_TRIGGER"] != "STREAM_SOURCE") {
  http_response_code(405);
  error_log("Unsupported trigger type.");
  echo("Unsupported trigger type.");
  return;
}

Next, we want to retrieve the stream name that MistServer reports to the PHP script. This is sent on the first line of the POST body.

//retrieve the post body
$post_body = file_get_contents("php://input");
//convert to an array
$post_body = explode("\n",$post_body);
$stream = $post_body[0];

If the stream name equals random, we'll select a random video id from the library, and return the path. Make sure the path is the only thing that is sent.

if ($stream == "random") {    
  //select a random video from the library array
  $library = get_library();
  $random_video_id = array_rand($library);

  //return the path
  echo $library[$random_video_id];
  return;
}

To simulate a video library, I've set up a little function that indexes video files from a folder, just for this demo. This should be replaced with the library database system when implemented.

That's all. When the stream random is requested, a random video will be shown.
There's a little caveat here, though. If the stream random is already active, a new request will receive the same video. We can prevent this by adding a unique wildcard token to the stream name; that way the stream won't be active yet. In the trigger script, edit the stream name condition:

if (($stream == "random") || (substr($stream,0,7)) == "random+") {

And, on the video page:

$streamname = "random+".rand();

Embed the stream on a page, and every page load a random video will be selected.

Back to the library scenario

Now that we understand how the trigger should be used, let's get back to a more practical use case. There isn't that much of a difference: we want to pass the desired video id to the trigger script, and return the appropriate path. We can simply add the video id to the stream as the wildcard token.
I've also created a new stream, library, and applied the trigger to it.

if ((substr($stream,0,8)) == "library+") {
  $wildcard = substr($stream,8); //strip the "library+" part

  $library = get_library();

  echo $library[$wildcard];
  return;
}

With, on the video page:

$streamname = "library+".intval($_GET["video_id"]);

The URL to our video page could be something user-friendly like /Movies/45/Big Buck Bunny Goes To Town/. Simply configure your HTTP server to rewrite the URL so that the video id can be extracted, as sketched below.
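
As a minimal sketch (assuming Apache with mod_rewrite; the video.php page name is just an example), a rewrite rule for that URL scheme could look like this:

RewriteEngine On
RewriteRule ^Movies/([0-9]+)/ /video.php?video_id=$1 [L,QSA]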

You can consider using multiple streams (movies, shows, live) for different categories. That way, you can have different stream settings in MistServer to suit your needs.

I hope these examples can help you if you are looking to set up a video library using MistServer. As always, if you have any questions or want to tell us about your awesome project, feel free to contact us.
Next time, Jaron will be back with another behind the scenes blog post.
In the meantime: happy streaming!

23 Oct 2017

[Blog] Recording live streams with MistServer

Hey everyone, lately we've had some questions regarding recording and how to set it up. I thought it would be a good subject to talk about as while it's quite easy to use, there are always a few things to keep in mind.

The basics of recording in MistServer

All recording in MistServer is done through the push panel within your MistServer interface. This is because we see recording as pushing the stream to a local file. To set one up we only have to push towards a path/file.ext; I'll talk more about that below. At the time of writing this blog post MistServer can only record to FLV, TS, MP3 and WAV files, but we will be adding additional formats in the future; our aim is to support recording for any protocol that makes sense to record in.

There are two flavors of pushing: normal pushes and automatic pushes.

Normal pushes

Normal pushes are one-time events. A push, or recording in this case, of your selected stream will be started, and after it's done the push entry will be removed, leaving just your recording. These pushes are unaffected by the push settings at the top of the menu, as they are single non-restarting recordings. It's the easiest method to start a recording with if you just want a single recording and be done with it.

Automatic pushes

Automatic pushes are set to monitor a stream and start recording the moment that stream becomes active. This could be a live stream or the wildcard streams of a live stream; any recording set up using this method will be made with the settings you configured. It's the method to choose if you want to set up something that automatically records live streams.

Push settings: stream name, target and target options

Stream name

Stream name is the stream you'll be recording with this push command. There are three options here: streamname, streamname+ and streamname+wildcard. Streamname will just use the stream with the matching stream name, streamname+ will use all wildcard streams of the chosen stream, and streamname+wildcard will just use the chosen wildcard stream.

Target

Target is the push method. To record you must choose a path/file.flv or path/file.ts, for example /media/recordings/recording01.ts.

Target options

We have two types of target options: variables and parameters. Variables are replaced by their corresponding value when the push/recording starts, while parameters change the way the recording is handled. Variables are written as $variable, while parameters are appended as ?parameter; multiple parameters are chained as ?parameter01&parameter02&parameter03 and so on. Variables are available in every MistServer version that has pushing functionality, while parameters are available from MistServer version 2.13 and up.

Variables

  • $stream: replaced by the streamname+wildcard in the file name
  • $day: replaced by the day number in the file name
  • $month: replaced by the month number in the file name
  • $year: replaced by the year number in the file name
  • $hour: replaced by the hour (00-23) at recording start in the file name
  • $minute: replaced by the minute (00-59) at recording start in the file name
  • $seconds: replaced by the seconds (00-59) at recording start in the file name
  • $datetime: replaced by $year.$month.$day.$hour.$minute.$seconds at recording start in the file name

Parameters

  • ?recstart=time_in_MS: starts the recording at the closest frame when the given time in milliseconds is reached
  • ?recstop=time_in_MS: stops the recording at the closest frame after the given time in milliseconds is reached
  • ?recstartunix=time_in_SECONDS: starts the recording at the closest frame when the given UNIX time in seconds is reached
  • ?recstopunix=time_in_SECONDS: stops the recording at the closest frame when the given UNIX time in seconds is reached
  • ?passthrough=1: activates passthrough: all inputs will be used in the push/recording

Things to keep in mind, common mistakes

Here I will list a few things that are handy to keep in mind or usually go wrong the first time when setting up recordings.

Variables are necessary to avoid overwriting automatic recordings

Without using variables you will overwrite your automatic recordings the second your stream source becomes active. Adding a date and timestamp will make sure your file gets a unique enough name to avoid this. This is especially important if your source tends to be unstable and restarts while recording.

When using the recstart/recstop parameters, the timestamp used is not necessarily the same timestamp used by players

Players usually start their playback at 0 seconds no matter what the stream data says, but depending on the source a recording can contain data starting from a timestamp higher than 0ms. A stream could for example claim to start at timestamp 120000 if you started recording after the stream had already been active for 2 minutes. This means you will have to account for this offset as well.

When using recstartunix/recstopunix the UNIX time between machines doesn't have to be the same

The UNIX time used is the UNIX time of the machine running MistServer; this time depends on that machine's settings, so make sure you do not assume it is the same on every machine.

Make sure you have write access to the folder you are writing to

It sounds obvious, but MistServer will not be able to record anything if it is not allowed to write files. Make sure the given path is a path that can be used.
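
A quick sanity check (assuming MistServer runs as the user mistserver; adjust to your own setup) is to try creating a file in the target folder as that user:

sudo -u mistserver touch /media/recordings/writetest.ts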

If you start a single recording of a live stream before it is available, you have about 10 seconds before it gets auto-removed

Any streams that are not available within 10 seconds of starting the recording will be assumed not working/active and removed. Automatic recordings should be used if you do not plan to set the source live right before or after setting the recording.

Make sure the protocol you want to record in is active and available

If MistServer cannot access the protocol you want to record in (HTTPTS for TS and FLV for FLV) it will not be able to record the file at all.

A few examples

Lastly I will leave a few examples of how you could set up recordings.

Automatic recording of wildcard streams with their name+month.day.hour.minute added

Stream: live+

Target: /media/recording/$stream$month$day$hour$minute.ts

Automatic recording of wildcard streams and automatically grabbing any multibitrate tracks

Stream: live+

Target: /media/recording/$stream.ts?passthrough=1

Recording of a stream as myrecording.ts starting at 1 minute for 2 minutes

Stream: live

Target: /media/recording/myrecording.ts?recstart=60000&recstop=180000

Automatic recording of the wildcard stream live+specific_stream as a multibitrate ts file named mymultibitratefile.ts

Stream: live+specific_stream

Target: /media/recording/mymultibitratefile.ts?passthrough=1

That should cover most uses when recording. The next blog will be by Carina, covering how to set up MistServer when you've got to work with a huge library.

19 Oct 2017

[Blog] Setting up a transcoder

Hello everyone! It's been a while since our last post due to IBC2017, but we're working to catch up on that again.

This post will be about a question we've heard quite a bit over the last few years: "How do we use a transcoder with MistServer?", and today I will show you two ways to do this: our trigger system, and our up-and-coming ts-exec feature.

Trigger system

Using the trigger system, we can set up a script to execute whenever a new track is added, using the STREAM_TRACK_ADD trigger. This trigger contains as a payload the name of the stream, and the id of the track that is being added. The following script could serve as a starting point, taking a stream and pushing an extra quality in 720p, 1mbit:

#!/bin/bash

HOST="localhost"
HTTPPORT=8080
LOGDIR="/tmp/"

read streamname
read tracknumber

echo -e "Trigger $1:\n${streamname} : --${tracknumber}-- ? --${vTrack}--\n" >> ${LOGDIR}/triggerLog

vTrack=`curl -s  http://${HOST}:${HTTPPORT}/info_${streamname}.js -o - | sed -n -e 's/mistvideo\[.*\] = {/{/gp' | jq .meta.tracks 2> /dev/null | grep "video_" | sed -e 's/.*_\(.*\)":.*/\1/g'`

if [ ${vTrack} == ${tracknumber} ]; then
  echo "Starting encode" >> ${LOGDIR}/triggerLog

  ffmpeg -i http://${HOST}:${HTTPPORT}/${streamname}.mp4?rate=0 -muxdelay 0 -c:v h264 -b:v 1M -s:v 1280x720 -an -f flv rtmp://${HOST}/push/${streamname} 2> ${LOGDIR}/ffmpeg_log
fi

  • Lines 3-5: Set up the parameters used further on in the script; these would need to be the host and port that MistServer is available on, as well as a local directory where the logs will be kept.
  • Lines 7-8: Read the streamname and tracknumber from standard input, where our trigger system sends this data.
  • Line 10: Write the trigger data to the logfile.
  • Line 12: Use curl and jq to retrieve the stream info from mistserver, and extract the id of the video track(s) if any.
  • Line 14: Check if the added track is the only video track.
  • Line 15: Write the 'Starting encode' message to the logfile.
  • Line 17: Start ffmpeg to push to the same stream, adding tracks to the already available data. (See our older post for a detailed explanation on how to use ffmpeg)

The logfile is used mainly for debugging purposes; if you decide to use this script in production, please make sure logging is removed or handled properly.

ts-exec

Starting with our next release (2.13), we will make things even more straightforward with our ts-exec: feature for push outputs. This feature lets MistServer start a script or program that receives raw TS data over standard input, and combining this with the automatic push system allows an encode to start the moment a new stream becomes available.

Using this feature to adapt a stream, you can run your encoder whenever needed; using ffmpeg on a wildcard stream input, for example, gives the following push target:

ts-exec:ffmpeg -i - -copyts -muxdelay 0 -c:v:0 h264 -b:v:0 1M -s:v:0 1280x720 -an -f flv rtmp://localhost/push/$stream
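
Configured as an automatic push, using the same Stream/Target notation as our earlier recording post, this could look like the following (host and stream names are just examples):

Stream: live+

Target: ts-exec:ffmpeg -i - -copyts -muxdelay 0 -c:v:0 h264 -b:v:0 1M -s:v:0 1280x720 -an -f flv rtmp://localhost/push/$stream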

That's all there really is to it. Of course, starting multiple encodes to create a multibitrate stream, or coupling in a different encoder, will require some tweaks. For more details and questions, feel free to contact us.

See you all next time; our next blog will be up in a couple of days.

6 Sep 2017

[Blog] An introduction to OTT: What is OTT anyway?

Hello everyone,

IBC2017 is around the corner and with it comes a lot of preparation, whether it is familiarizing yourself with the layout, setting up your schedule or double-checking the requirements and specifications of that special something you'll need to complete your project. One of the most troubling aspects of the OTT field is that it is loaded with jargon and covers a tremendously broad spectrum of activities, which makes it easy to confuse things. With all the possible uses of OTT we thought it might be a good idea to discuss the basics of any OTT project. Luckily, in an attempt to make the core basics of OTT a bit clearer, our CTO Jaron gave a short presentation covering exactly this at last year's IBC2016. Below is a transcript of his presentation.

slides

SLIDE1

What is OTT anyway?

OTT literally stands for Over The Top, which doesn't really tell you much about what it actually is. To clarify: it's anything that is not delivered through traditional cable or receivers, so not traditional television. To put it simply: video over the internet.

Now I’m going to be using the word media throughout this presentation instead of video as it could also be audio data, metadata or other data similar to that.

To add to that: internet protocol solutions, set-top boxes, HbbTV and similar solutions are also technically examples of OTT, even though they are associated with traditional cable providers and may use a traditional cable receiver.

We will generalize this by saying OTT is internet based delivery.

SLIDE2

Some of the important topics are Codecs, Containers and Transport.

A codec is a way to encode video or audio data, to make it take less space for sending over the internet, as raw data is just not doable.

Containers are methods to store that encoded data and put it in something that can be sent over the internet.

Transport is the method you send it over the internet with.

SLIDE3

Again: Codecs are the method to encode media for storage/transport.

There’s several popular ones, there’s more obviously, but I’ll list some of the popular ones here. For video you have: H264, the current most popular choice. H265, better known as HEVC, which is the up and coming many people are already switching to this. Then there’s VP8/9 which are the ones Google has been working on. Kind of a competitor to HEVC.

They all make video smaller, but have their individual differences.

For audio you have AAC, the current most popular one; Opus, which I personally think is the holy grail of audio codecs, it can do anything; MP3, which is on the way out, but everyone knows it, which is why it's mentioned; and Dolby/DTS, popular choices for surround sound, which are not used over the internet often as most computers are not connected to a surround sound installation.

For subtitles you have SubRip, which is the format usually taken from DVDs, and WebVTT, which is more or less the Apple equivalent. There are more, but there are so many it's impossible to list them all.

There’s upcoming video codecs: AV1, which is basically a mixture of VP10, Daala and Thor, all codecs in development merged together in what should be the holy grail for video codecs. Since they’ve decided to merge projects together it’s unclear how fast development is going. I expect them to be finished 2017 - 2019’ish.

SLIDE4

So how do you pick a codec?

The main reason to pick a codec is convenience. It could be that it’s already encoded in that format or it’s easy to switch to it.

Another big reason is bitrate: newer codecs generally have better quality per bit, and as internet connections usually have a maximum speed it's really important to make sure you can send the best quality possible in the least amount of data.

Hardware support is another big reason. Since encoding and decoding is a really processor intensive operation you will want to have hardware acceleration for it. For example watching an H265 HD video would melt any smartphone without hardware support.

Finally, there is container/transport compatibility. Some containers or transports only support a certain set of codecs, which means you're stuck with picking from that particular set.

SLIDE5

Which brings us to Containers.

Containers dictate how you can mix codecs together in a single stream or file. Some of the popular choices are:

MPEG TS, which is often used for traditional broadcast

ISO MP4, I think everyone is familiar with this one.

MKV/WebM, enthusiasts of Japanese series usually use this as it has excellent subtitle support

Flash, which I consider a container even though it's not technically one, because FLV and RTMP (the Flash formats) share the same limitations and likewise limit what you can pick.

(S)RTP, which I consider a container even though it's technically a transport method, because it's shared among several different transport methods as well.

SLIDE6

That brings us to transport methods.

These say how codecs inside their container are transported over the internet. This is the main thing that has an impact on what the quality of your delivery will be.

I’ve split this into three different types of streaming.

True streaming protocols: RTSP, RTMP, WebRTC. These do what I consider actual streaming. You connect to them over something proprietary, because none of these protocols are integrated in players or devices by default yet (WebRTC should be in the future). As a pro they have a really fast start time and really low latency; they're great for live. However, they need a media server or web server extension to work, and they usually, though not always, have trouble breaking through firewalls. Technically the best choice for live, but there are a lot of buts in there.

Pseudo streaming protocols: this is when you take a media file and deliver it bit for bit, not all at once, but streamed to the end delivery point. Doing so gives you the advantage of low latency and high compatibility (it can pretend to be a file download); on the other hand you still need a media server or web server extension to deliver this format. It's slightly easier though, and there are no firewall problems.

Segmented HTTP, which is the current dominant way to deliver media. You see all the current buzzwords (HLS, DASH and fragmented MP4) in there. These are essentially folders of video segments, and each segment contains a small section of the file. This has a lot of advantages: it's extremely easy to proxy and you can use a web server for delivery, but it has the really big disadvantage of a slow start-up time and really high latency. HLS, for example, has between 20 and 40 seconds of delay in practice, which is unacceptable for some types of media. All segmented HTTP transport methods have the same kind of delay; some are a little faster, but you'll never get sub-second latency with these.

SLIDE7

End of presentation

21 Aug 2017

[Blog] An introduction to encoding and pushing with ffmpeg

Hello everyone, Balder here. This time I'd like to talk about doing your own encodes and stream pushes. There are a lot of good options out there on the internet, both paid and open source. To cover them all would require several posts, so instead I'll just talk about the one I like the most: ffmpeg.

Ffmpeg, the Swiss army knife of streaming

Ffmpeg is an incredibly versatile piece of software when it comes to media encoding, pushing and even restreaming other streams. Ffmpeg is purely command line, which is both its downside and strength. It'll allow you to use it in automation on your server, which can be incredibly handy for any starting streaming platform. The downside is that as you'll have to do this without a graphical interface, the learning curve is quite high, and if you don't know what you're doing you can worsen the quality of your video or even make it unplayable. Luckily the basic settings are really forgiving for starting users, and as long as you keep the original source files you're quite safe from mistakes.

So what can ffmpeg do?

Anything related to media really: there's little that ffmpeg can't do. If you have a media file/stream you'll be able to adjust the codecs however you want, change the media format and record it to a file or push it towards a media server. A big benefit to this is that it allows you to bypass known problems/issues that certain protocols or pushing methods have. We use it mostly as an encoder and pushing application ourselves; it's deeply involved in all of our tests.

The basics of ffmpeg

Ffmpeg has too many options to explain easily to new users, so we'll go over the bare minimum required to encode and push streams with ffmpeg. Luckily ffmpeg has a wide community and information on how to do something is easily found through your preferred search engine. Every ffmpeg command follows this syntax:

ffmpeg input(s) [codec options] output(s)

Input

Input is given by -i Input.file/stream_url. The input can be a file or a stream; of course you'll have to verify that the input you're filling in exists or is a valid URL, but that's as far as the difficulty goes. A special mention goes to the "-re" option to read a file in real-time: it allows you to use a media file as a "live" source if you wish to test live streaming without having a true live source available.
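
For example (borrowing the copy codec and RTMP output options explained below; IP, port and stream name are placeholders), a file can be sent to a server as if it were a live source like this:

ffmpeg -re -i INPUT.mp4 -c copy -f flv rtmp://SERVERIP:PORT/live/STREAMNAME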

Codec

A huge array of options is available here; however, we'll only cover two things: copying codecs, and changing codecs to H264 for video and AAC for audio. As H264/AAC are the most common codecs at the moment there's a good chance you already have these in your files; otherwise re-encoding them to H264/AAC with the default settings of ffmpeg will almost certainly give you what you want. If not, feel free to check out the ffmpeg documentation here. To specify the video or audio codec, use -c:v for video or -c:a for audio. Lastly, you can use -c to choose all codecs in the file/stream at once.

Copying codecs (options)

The copying of codecs couldn't be easier with ffmpeg. You only need to use -c copy to copy both the video and audio codecs. Copying the codecs allows you to ingest a media file/stream and record it or change it to a different format. You can also only copy a specific codec like this: -c:v copy for video and -c:a copy for audio. The neat thing about using the copy option is that it will not re-encode your media data, making this an extremely fast operation.

Encoding to H264/AAC (options)

Ffmpeg allows you to change the video and audio track separately. You can copy a video track while only editing the audio track if you wish. Copying is always done with the copy codec. An encode to H264/AAC is done by using the option -c:a aac -strict -2 -c:v h264. Older versions of ffmpeg might require the equivalent old syntax -acodec aac -strict -2 -vcodec h264 instead.

Output

The output of ffmpeg can either be a file or a push over a stream url. To record it to a file use outputfile.ext; almost any media type can be chosen by simply using the right extension. To push over a stream use -f FORMAT STREAM_URL. The most commonly used format for live streaming will be RTMP, but RTMP streams internally use the FLV format. That means you'll have to use -f flv rtmp://SERVERIP:PORT/live/STREAMNAME for this. Other stream types may auto-select their format based on the URL, similar to how this works for files.

Examples

  • FLV file Input to MP4 file, copy codecs

ffmpeg -i INPUT.flv -c copy OUTPUT.mp4

In all the examples below we'll assume you do not know the codecs and will want to replace them with H264/AAC.

  • RTMP stream Input to FLV file, reencode

ffmpeg -i rtmp://IP:PORT/live/STREAMNAME -c:a aac -strict -2 -c:v h264 OUTPUT.flv

  • MP4 file Input to RTMP stream, reencode

ffmpeg -re -i INPUT.mp4 -c:a aac -strict -2 -c:v h264 -f flv rtmp://IP:PORT/live/STREAMNAME

  • HLS stream input to RTMP stream, reencode

ffmpeg -i http://IP:PORT/hls/STREAMNAME/index.m3u8 -c:a aac -strict -2 -c:v h264 -f flv rtmp://IP:PORT/live/STREAMNAME

  • MP4 file input to RTSP stream, reencode

ffmpeg -re -i INPUT.mp4 -c:a aac -strict -2 -c:v h264 -f rtsp -rtsp_transport tcp rtsp://IP:PORT/STREAMNAME

  • HLS stream input to RTSP stream, reencode

ffmpeg -i http://IP:PORT/hls/STREAMNAME/index.m3u8 -c:a aac -strict -2 -c:v h264 -f rtsp -rtsp_transport tcp rtsp://IP:PORT/STREAMNAME

  • RTSP stream input over TCP to RTMP stream, copy

Using ffmpeg to ingest over TCP instead of UDP makes sure you don't have the packet loss problems that UDP has, and gives a better and more stable picture for your stream.

ffmpeg -rtsp_transport tcp -i CameraURL -c copy -f flv rtmp://IP:PORT/live/STREAMNAME

Creating a multibitrate stream from a single input

This one is a bit advanced, but often asked for, so I've opted to include it. To fully understand the command you'll need some explanation, however. It's important to keep in mind that you will need to tell ffmpeg how many video/audio tracks you want and which track should be used as the source for the options later on. First you start with mapping and selecting input tracks, then you describe the encoder settings per track.

ffmpeg -i INPUT -map a:0 -map v:0 -map v:0 -c:a:0 copy -c:v:0 copy -c:v:1 h264 -b:v:1 250k -s:v:1 320x240 OUTPUT

To explain all of those options in more detail:

  • INPUT can be either a file or a stream for input
  • OUTPUT can be either a file or a stream for output
  • -map a:0 selects the first available audio track from the source
  • -map v:0 selects the first available video track from the source
  • -map v:0 selects the first available video track from the source a second time
  • -c:a:0 copy tells ffmpeg to copy the audio track without re-encoding it
  • -c:v:0 copy tells ffmpeg to copy the video track without re-encoding it
  • -c:v:1 h264 tells ffmpeg to also re-encode the video track in h264
  • -b:v:1 250k tells ffmpeg that this second video track should be 250kbps
  • -s:v:1 320x240 tells ffmpeg that this second video track should be at 320x240 resolution

You can keep adding video or audio tracks by adding -map v:0 or -map a:0; just be sure to set the encoder options for every track you add. You can select the second input video or audio track with -map v:1 or -map a:1, and so forth for additional tracks, as sketched below.
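
As a sketch of extending this to a third quality (the bitrate and resolution values are just examples), you append another -map v:0 and a matching set of encoder options:

ffmpeg -i INPUT -map a:0 -map v:0 -map v:0 -map v:0 -c:a:0 copy -c:v:0 copy -c:v:1 h264 -b:v:1 250k -s:v:1 320x240 -c:v:2 h264 -b:v:2 1M -s:v:2 1280x720 OUTPUT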

Well, that was it for the basics of using ffmpeg. I hope it helps you when you want to get yourself familiarized with it. The next blog will be released in the same month as IBC; with all the exciting new developments being shown, we think it's a good idea to go back to the basics of OTT, so we'll be releasing Jaron's presentation "What is OTT anyway?", given at last year's IBC2016, covering the basics of OTT at a level for people that are just getting started with streaming.

4 Aug 2017

[Release] Stable release 2.12 now available!

Hello everyone! Stable release 2.12 of MistServer is now available! The full change log is available here and downloads are here. Our Pro-edition customers will receive a new build notification in their e-mail.

Here are some highlights:

  • (Better) support for the PCM, Opus, MPEG2 (Pro-only) and MP2 (Pro-only) codecs.
  • Raw H.264 Annex B input and output support
  • exec-style inputs (autostarts/stops a given command to retrieve stream data) for TS (Pro-only) and raw H.264
  • Pro only: HEVC support in RTSP input and output
  • Pro only: TS over HTTP input support
  • Subnet mask support for push input whitelisting
  • Improved support for quickly stopping and restarting an incoming push
  • Many other small fixes and improvements!
1 Aug 2017

[Blog] Raw H.264 from Raspberry Pi camera to MistServer

Hello everyone, Balder here. As you might know we have native ARMv7 and ARMv8 builds since MistServer 2.11, this allows for an easier install of MistServer on the Raspberry Pi. As a 3d printing enthusiast I use my Raspberry Pi...

Hello everyone, Balder here. As you might know we have native ARMv7 and ARMv8 builds since MistServer 2.11, which allows for an easier install of MistServer on the Raspberry Pi. As a 3D printing enthusiast I use my Raspberry Pi quite a lot to monitor my prints, but I was a bit let down by the quality and stability of most solutions: many panics were luckily solved simply by pressing "F5", as it was the camera output, not the print, that had failed.

I needed a better solution and with the recent native MistServer ARM release we had the perfect excuse to try and do something directly with the Raspberry Pi cam. One of the new features we have in MistServer 2.12 is raw H264 support for all editions of MistServer. Below I will be describing just how I am using this new feature.

Ingesting Raspberry Pi Cam directly into MistServer

Ingesting the RaspiCam is a little different from other inputs, as instead of an address we will be filling in the command to generate video and dump the video to standard output. It will look something like this:

Image of stream configurations within MistServer to use Raspberry Pi camera as input

As you can see, the new input is used by prefixing the command with h264-exec:, which runs the rest of the line as a shell command and ingests its output as raw Annex B H.264. Do note that there is no support for shell escapes whatsoever, so if you need spaces or other kinds of escapes inside your arguments, run a script instead and put them in the script.

Raspivid is the most direct method of ingesting the Raspberry Pi camera footage in H264 format, powered by the hardware encoder. If the binary is not in your PATH, you might need to install it or enter the full path. For example, in ArchLinuxARM the path is /opt/vc/bin/raspivid instead of just plain raspivid.
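If your raspivid arguments do need spaces or other escapes, the wrapper-script approach mentioned above could look something like this; the script name and location are made up, and the flags are the ones explained below:

#!/bin/sh
# rpicam.sh - hypothetical wrapper script for the h264-exec input;
# any arguments that need spaces or shell escapes can safely live in here
exec raspivid -t 0 -pf high -lev 4.2 -g 10 -ih -qp 35 -o -

The stream source would then simply be h264-exec:/home/pi/rpicam.sh.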

Recommended settings for ingesting RaspiCam

There are a few settings we recommend, and some that are downright required, in order to ingest the camera correctly.

As you can see from our example we use the following:

h264-exec:raspivid -t 0 -pf high -lev 4.2 -g 10 -ih -qp 35 -o -

Required settings

  • -t 0

    This disables the timeout, to make sure the stream keeps going.

  • -ih

    This will insert the headers inline, which MistServer requires in order to ingest the stream correctly.

  • -o -

    This will output the video to standard output, which MistServer uses to ingest.

Recommended settings

  • -pf high

    This sets the H264 profile to high; we tend to see better quality with this setting.

  • -lev 4.2

    This sets the H264 level to 4.2 (the highest option), which tends to be better supported across protocols than the other options.

  • -g 10

    This setting makes sure there is a keyframe every 10 frames, which keeps the stream closer to live. You can lower the number for an even more live stream, but bandwidth costs will rise.

  • -qp 35

    This sets the quality of the stream; we tend to see the best results around 35, but it is a matter of personal preference.

Watching your live stream

If everything is filled in correctly, you should now see your stream show up when you try to watch it. The neat thing about this method is that it only starts ingesting the stream when someone tries to watch it, so you are not wasting resources on a stream that no one is viewing. It also attempts a restart in the event raspivid crashes, and it closes automatically when no one is watching anymore.

Image of Raspberry Pi camera live footage playing

I am pretty happy with my new method of monitoring my prints: it has been more reliable than my older setup using mjpeg-streamer, and the media server functionality of MistServer makes it easier to monitor my prints from outside my home network, as this method is quite a lot better in terms of bandwidth and quality.

Well that was all for this post, see you all next time!

18 Jul 2017

[Blog] Load balancing especially for media servers

Hello streaming media enthusiasts! Jaron here, with a new background article. In this blog post, I'll detail the why and how of our load balancing solution, which is currently being beta-tested in some of our clients' production deployments. Quick primer on load...

Hello streaming media enthusiasts! Jaron here, with a new background article. In this blog post, I'll detail the why and how of our load balancing solution, which is currently being beta-tested in some of our clients' production deployments.

Quick primer on load balancing and how it relates to media

The concept of load balancing is not new; not by a long shot. It is probably one of the most researched and experimented-upon topics in computing. So, obviously, the load balancing problem has already been thoroughly solved. There are many different techniques and algorithms, both generic and specific ones.

Media is a very particular type of load, however: it highly benefits from clustering. This means that if you have a media stream, it's beneficial to serve all users of this stream from the same server. Serving the same stream from multiple servers increases the load much more than serving multiple users from a single server. Of course that changes when there are so many users connected that a single server cannot handle the load anymore, as you will be forced to spread those users over multiple servers. Even then, you will want to keep the number of servers per stream as low as possible, while spreading the users more or less equally over those servers.

Why is this important?

Most traditional load balancing methods will either spread randomly, evenly, or to whatever server is least loaded. This works, of course. However, it suffers greatly when there is a sudden surge of new viewers coming in. These viewers will either all be sent to the same server, or spread over all servers unnecessarily. The result of this is sub-optimal use of the available servers... and that means higher costs for a lesser experience for your end-users.

Load balancing, MistServer-style

MistServer's load balancing technique is media-aware. The load balancer maintains constant contact with all servers, receiving information on bandwidth use, CPU use, memory use, active streams, viewer counts per stream and bit rates per stream. It uses these numbers to preemptively cluster incoming requests on servers while making predictions on bandwidth use after these users will connect. This last trick in particular allows MistServer to handle surges of new users correctly without overloading any single server. Meanwhile, it constantly adjusts its predictions with new data received from the servers, accounting for dropped users and changes in bandwidth patterns over time.

A little extra

Our load balancer also doubles as an information source for the servers: they can make queries as to what server to best pull a feed from for the lowest latency and highest efficiency. As an extra trick, the load balancer (just like MistServer itself) is fully capable of making any changes to its configuration without needing to be restarted, allowing you to add and remove servers from the list dynamically without affecting your users. Last but not least, the load balancer also provides merged statistics on viewer counts and server health for use in your own monitoring and usage tracking systems.

Want to try it?

As mentioned in the introduction, our load balancer is still in beta. It's fully stable and usable; we just want to collect a little bit more data to further improve its behavior before we release it to everyone. If you are interested in helping us make these final improvements and/or in testing cool new technology, contact us and let us know!

Wrapping up

That was it for this post. Next time Balder will be back with a new subject!

3 Jul 2017

[Blog] Building an access control system with triggers and PHP

Hello everyone, Carina here. In this blog I will discuss how PHP can be used to inform MistServer that a certain user has access to streams through the use of triggers. The Pro version is required for this. The process Before I...

Hello everyone, Carina here. In this blog I will discuss how PHP can be used to inform MistServer that a certain user has access to streams through the use of triggers. The Pro version is required for this.

The process

Before I go into the implementation details, let's take a look at what the user validation process is going to be like.

  1. The process starts when a user opens a PHP page, which we shall call the video page
  2. The website knows who this user is through a login system, and uses their userid and their ip to generate a hash, which is added to video urls on the video page
  3. The video is requested from MistServer, which fires the USER_NEW trigger
  4. The trigger requests another PHP page, the validation page, which checks the hash and returns whether or not this user is allowed to watch streams
  5. If the user has permission to watch, MistServer sends the video data

Alright, on to implementation.

The video page

Please see the sample code below.

First, the user id should be retrieved from an existing login system. In the sample code, it is set to 0, or the value provided through the user parameter. Then, the user id and ip are hashed, so that the validation script can check if the user id hasn't been tampered with. Both the user id and the hash will be appended to the video url, so that MistServer can pass them along to the validation script later.
Next, the video is embedded into the page.

<?PHP

  //some general configuration
  //where MistServer's HTTP output can be found
  $mist_http_host = "http://<your website>:8080";
  //the name of the stream that should be shown
  $stream = "<stream name>";

  //set the userid
  //in a 'real' situation, this would be provided through a login system
  if (isset($_REQUEST["user"])) { $user_id = intval($_REQUEST["user"]); }
  else { $user_id = 0; }

  //create a hash containing the remote IP and the user id, that the user id can be validated against
  $hash = md5($_SERVER["REMOTE_ADDR"].$user_id."something very very secret");

  //prepare a string that can pass the user id and validation hash to the trigger
  $urlappend = "?user=".$user_id."&hash=".$hash;

  //print the embed code with $urlappend where appropriate
?>
<div class="mistvideo" id="tmp_H0FVLdwHeg4j">
  <noscript>
    <video controls autoplay loop>
      <source
        src="<?PHP echo $mist_http_host."/hls/".$stream."/index.m3u8".$urlappend; ?>"
        type="application/vnd.apple.mpegurl">
      </source>
      <source
        src="<?PHP echo $mist_http_host."/".$stream.".mp4".$urlappend; ?>"
        type="video/mp4">
      </source>
    </video>
  </noscript>
  <script>
    var a = function(){
      mistPlay("<?PHP echo $stream; ?>",{
        target: document.getElementById("tmp_H0FVLdwHeg4j"),
        loop: true,
        urlappend: "<?PHP echo $urlappend; ?>"
      });
    };
    if (!window.mistplayers) {
      var p = document.createElement("script");
      p.src = "<?PHP echo $mist_http_host; ?>/player.js"
      document.head.appendChild(p);
      p.onload = a;
    }
    else { a(); }
  </script>
</div>

Trigger configuration

Sample configuration of the USER_NEW trigger

In the MistServer Management Interface, a new trigger must be added, using the following settings:

  • Trigger on: USER_NEW
  • Applies to: Check the streams for which user validation will be required, or don't check any streams to require validation for all streams
  • Handler: The url to your validation script
  • Blocking: Check.
    This means that MistServer will use the script's output to decide whether or not to send data to the user.
  • Default response: 1 or 0.
    If for some reason the script can't be executed, this value will be used by MistServer. When this is set to 1, everyone will be able to watch the stream in case of a script error. When set to 0, no one will.

The validation page

Please see the sample code below.
It starts off by defining what the script should return in case something unexpected happens. Here I've chosen 0 (don't play the video), as an unexpected situation most likely means someone is trying to tamper with the system.
Next, some functions are defined that will make the rest of the script more readable.

The first actual action is to check whether the script was called by one of the supported triggers. The trigger type is sent by MistServer in the X_TRIGGER request header.

Next, the payload that MistServer has sent in the request body is retrieved. The payload contains variables like the stream name, ip, protocol, etc. These will be used shortly, starting with the protocol.
When the protocol is HTTP or HTTPS, the user is probably trying to request javascript or CSS files that are required for the meta player. There is no reason to deny someone access to these, and thus the script informs MistServer it is allowed.

If the protocol is something else, the user id and hash are retrieved from the request url, which is passed to the validation script in the payload.
The hash is compared to what the hash should be, which is recalculated with the provided payload variables. If the user id has been tampered with or if the request is from another ip, this check should fail and MistServer is told not to send video data.

Otherwise, the now validated user id can be used to evaluate if this user is allowed to watch streams.

<?PHP
  //what the trigger should return in case of errors
  //(kept as a string, so that die() prints it instead of using it as an exit status)
  $defaultresponse = "0";

  ///\function get_payload
  /// the trigger request contains a payload in the post body, with various information separated by newlines
  /// this function returns the payload variables in an array
  function get_payload() {

    //translation array for the USER_NEW (and CONN_OPEN, CONN_CLOSE, CONN_PLAY) trigger
    $types = Array("stream","ip","connection_id","protocol","request_url","session_id");

    //retrieve the post body
    $post_body = file_get_contents("php://input");
    //convert to an array
    $post_body = explode("\n",$post_body);

    //combine the keys and values, and return
    return array_combine(array_slice($types,0,count($post_body)),$post_body);
  }

  ///\function no_ffffs
  /// removes ::ffff: from the beginning of an IPv4-mapped IPv6 address, so that it gives the same result as the plain IPv4 address
  function no_ffffs($str) {
    if (substr($str,0,7) == "::ffff:") {
      $str = substr($str,7);
    }
    return $str;
  }

  ///\function user_can_watch
  /// check whether a user can watch streams
  ///\TODO replace this with something sensible ;)
  function user_can_watch ($userid) {
    $can_watch = Array(1,2,3,6);
    if (in_array($userid,$can_watch)) { return 1; }
    return 0;
  }


  //as we're counting on the payload to contain certain values, this code doesn't work with other triggers (and it wouldn't make sense either)
  if (!in_array($_SERVER["HTTP_X_TRIGGER"],Array("USER_NEW","CONN_OPEN","CONN_CLOSE","CONN_PLAY"))) {
    error_log("This script is not compatible with triggers other than USER_NEW, CONN_OPEN, CONN_CLOSE and CONN_PLAY");
    die($defaultresponse);
  }




  $payload = get_payload();

  //always allow HTTP(S) requests
  if (($payload["protocol"] == "HTTP") || ($payload["protocol"] == "HTTPS")) { echo 1; }
  else {

    //error handling
    if (!isset($payload["request_url"])) {
      error_log("Payload did not include request_url.");
      die($defaultresponse);
    }

    //retrieve the request parameters
    $request_url_params = explode("?",$payload["request_url"]);
    parse_str($request_url_params[1],$request_url_params);

    //more error handling
    if (!isset($payload["ip"])) {
      error_log("Payload did not include ip.");
      die($defaultresponse);
    }
    if (!isset($request_url_params["hash"])) {
      error_log("Request_url parameters did not include hash.");
      die($defaultresponse);
    }
    if (!isset($request_url_params["user"])) {
      error_log("Request_url parameters did not include user.");
      die($defaultresponse);
    }

    //validate the hash/ip/userid combo
    if ($request_url_params["hash"] != md5(no_ffffs($payload["ip"]).$request_url_params["user"]."something very very secret")) {
      echo 0;
    }
    else {

      //the userid is valid, let's check if this user is allowed to watch the stream
      echo user_can_watch($request_url_params["user"]);

    }
  }

That's all folks: the validation process has been implemented.

Further information

The example described above is written to allow a user access to any streams that are served through MistServer. If access should be limited to certain streams, there are two ways to achieve this.
The simplest option is to configure the USER_NEW trigger in MistServer to only apply to the restricted streams. With this method it is not possible to differentiate between users (UserA can watch stream1 and 3, UserB can watch stream2).
If differentiation is required, the user_can_watch function in the validation script should be modified to take into account the stream that is being requested (which is included in the trigger payload), as shown in the sketch below.
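As a rough sketch of that idea (the access list below is made up), user_can_watch could take the stream name from the payload as a second argument:

  ///\function user_can_watch
  /// hypothetical per-stream variant: $acl maps user ids to the streams they may watch
  function user_can_watch ($userid, $stream) {
    $acl = Array(
      1 => Array("stream1","stream3"),
      2 => Array("stream2")
    );
    if (isset($acl[$userid]) && in_array($stream,$acl[$userid])) { return 1; }
    return 0;
  }

The call in the validation script would then become user_can_watch($request_url_params["user"], $payload["stream"]).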

The USER_NEW trigger is only fired when a user is not yet cached by MistServer. If this is not desired, for example when the validation script also has a statistical purpose, the CONN_PLAY trigger can be used instead. It should be noted however, that for segmented protocols such as HLS, CONN_PLAY will usually fire for every segment.

More information about the trigger system can be found in chapter 4.4 of the manual.
If any further questions remain, feel free to contact us.

Next time, Jaron will be discussing our load balancer.

26 Jun 2017

[Blog] AV1

Hello, this is Erik with a post about the up and coming AV1 video codec. While none of the elements that will eventually be accepted into the final design are currently fixed (and the specification of the codec might not...

Hello, this is Erik with a post about the up and coming AV1 video codec. While none of the elements that will eventually be accepted into the final design are currently fixed (and the specification of the codec might not even be finalized this year), more and more companies have started working on support already. This post is meant to give a more in-depth view of what is currently happening, and how everything will most likely eventually fall into place.

NetVC

Internet Video Codec (NetVC) is a working group within the Internet Engineering Task Force (IETF). The goal of the group is to develop and monitor the requirements for the standardization of a new royalty-free video codec.

The current version of their draft specifies a variety of requirements of the next generation royalty-free codec. These range from low-latency real-time collaboration to 4k IPTV, and from adaptive bit rate to the existence of a real-time software encoder.

While these requirements are not bound to a specific codec, it determines what would be needed for a codec to be deemed acceptable. Techniques from the major contenders have been taken into account whilst setting up this list in 2015, with the Alliance for Open Media (AOMedia) forming shortly after to develop the AV1 codec to comply to these requirements in a joint effort between the Daala, Thor and VP10 teams.

AV1

AOMedia was formed to streamline development of a new open codec, as multiple groups were working simultaneously on different new codecs. AV1 is the first codec to be released by the alliance. With many of the influential groups within the internet video area participating in its development, it is set up to compete with HEVC in terms of compression and visual quality while remaining completely royalty free.

Because steering completely clear of the use of patents when designing a codec is a daunting task, the AOMedia group has decided to provide a license to use their IP in any project using AV1, as long as the user does not sue for patent infringement. This, in combination with a thorough Intellectual Property Rights review, means that AV1 will be free of royalties. This should give it a big edge over the main competitor HEVC, for which there are multiple patent pools one needs to pay license fees to, with an unknown amount of patents not contained in these pools.

It was decided to take Google's VP10 codec as the base for AV1. In addition, AV1 will contain techniques from both Daala (by Xiph and Mozilla) and Thor (by Cisco).

The AV1 codec is developed to be used in combination with the opus audio codec, and wrapped in WebM for HTML5 media and WebRTC for real time uses. As most browsers already support both the WebM format and opus codec, this immediately generates a large potential user base.

Experiments

Development of AV1 is based mainly on the efforts of Google's VP10. Built upon VP9, VP10 was geared towards better compression while optimizing visual quality in the encoded bitstream. With the formation of AOMedia, Google decided to drop VP10 development and instead focus solely on the development of AV1.

Building upon a VP9 core, contributors can submit so called experiments to the repository, which can then be enabled and tested by the community. Based on whether the experiment is considered worthwhile, it enters IPR review, and after a successful pass there, it will be enabled by default and added to the specification. Most experiments have come from the respective developments from VP10, Daala and Thor, but other ideas are welcomed as well. As of yet, no experiments have been finalized.

Performance

Multiple tests have been run to compare AV1 to both H.264 and HEVC, with varying results over time. However, with no final selection of experiments, these performance measures have been made not only with different settings, but with completely different experiments enabled.

While it is good to see how the codec is developing over time, a real comparison between AV1 and any other codec can not be reliably made until the bitstream is frozen.

What's next?

With the end of the year targeted for finalizing the bitstream, the amount of people interested in the codec will probably only grow. With the variety of companies that AOMedia consists of, the general assumption is that the adoption of the codec and hardware support for encoding/decoding will be made available in a matter of months after the bitstream is finalized, rather than years.

While VP10 has been completely replaced by the AV1 codec, the same does not seem to hold for Thor and Daala. Both projects are still being developed in their own right, and that development does not seem limited to the features that will eventually be incorporated into AV1.

And that concludes it for now. Our next post will be by Carina, who will show how to create an access control system in PHP with our triggers.

1 Jun 2017

[Blog] Fantastic protocols and where to stream them

Hello everyone, Balder here. As mentioned by Carina I will talk about streaming protocols and their usage in streaming media. As the list can get quite extensive I will only focus on protocols in common use that handle both video...

Hello everyone, Balder here. As mentioned by Carina I will talk about streaming protocols and their usage in streaming media. As the list can get quite extensive I will only focus on protocols in common use that handle both video and audio data.

Types of protocols

To keep it simple, a protocol is the method used to send/receive media data. Protocols can be divided into two types:

  • Stream based protocol
  • File based protocol

Stream based protocol

Stream based protocols are true streaming protocols. They have two-way communication between the server and the client and maintain an open connection. This allows for faster, more efficient delivery and lower latency. These properties make them the best option, if available. On the downside, you need a player that supports the protocol.

File based protocol

File based protocols use media containers to move data from one point to another. The two main methods within file based protocols are progressive delivery and segmented delivery. Progressive delivery has shorter start times and latencies, while segmented delivery has longer start times and latencies.

Progressive delivery

With progressive delivery a single file is either simulated or present and transmitted in one large chunk to the viewer. The advantage of this is that when downloaded it becomes a regular file, the disadvantage is that trick play (seeking, reverse playback, fast forward, etc.) is only possible if the player supports it for the container format being used.

Segmented delivery

With segmented delivery the stream is split up into multiple smaller files, which are transmitted to the viewer one small chunk at a time. The advantage of this is that trick play is possible even without direct support for the container, and it is better suited to streams of infinite duration, for example live streams.

Short summary per protocol

Streaming protocols

Flash: RTMP

Real-Time Messaging Protocol, also known as RTMP, is used to deliver Flash video to viewers. One of the true streaming protocols. Since Flash is on the decline it’s less used as delivery method to viewers and more as a streaming ingest for streaming platforms.

RTSP

Real Time Streaming Protocol, also known as RTSP. The oldest streaming protocol, in its prime it was the most used protocol for media delivery. Nowadays it sees most of its use in IoT devices such as cameras and has uses for LAN broadcasts. Internally it uses RTP for delivery. Because of its excellent latency and compatibility with practically all codecs it is still a popular choice.

WebRTC

Web Real-Time Communications, also known as WebRTC. This is a set of browser APIs that makes browsers compatible with secure RTP (SRTP). As such it has all the properties of RTSP, with the added bonus of browser compatibility. Because it is still relatively new it hasn't seen much use yet, but there is a good chance that this will be the most prevalent protocol in the near future.

Progressive file delivery

Flash: Progressive FLV

Flash Video format, also known as FLV. It used to be the most common delivery format for web-based streaming, back when Flash was the only way to get a playing video in a browser. With the standardization of HTML5 and the decline of Flash installations in browsers it is seeing decreasing use.

Progressive MP4

MPEG-4, also more commonly known as MP4. Most modern devices and browsers support MP4 files, which makes it an excellent choice as protocol. The particularities of the format make it relatively high overhead and complicated for live streams, but the excellent compatibility still makes it a popular choice.

MPEG-Transport Stream (MPEG-TS)

Also known as MPEG-TS. It is the standard used for Digital Video Broadcast (DVB), the old-and-proven TV standard. Because it was made for broadcast it is extremely robust and can handle even high levels of packet loss. The downside to this is almost 20% of overhead.

Ogg

Ogg was one of the first open source and patent unencumbered container formats. Due to the open nature and free usage it has seen some popularity, but mostly as an addition to existing solutions because the compatibility is not wide enough to warrant it being used as the only delivery method.

Matroska

Also known as MKV. It is a later open source container format that enjoys widespread adoption, mainly because of its excellent compatibility with almost all codecs and subtitle formats in existence. Because of the wide compatibility in codecs it is very unpredictable whether it will play in-browser however.

WebM

WebM is a subset of Matroska, simplified for use on the web. Made to solve the browser compatibility issues of Matroska, it plays in almost all modern browsers. The downside is that the codecs are very restricted.

Segmented file delivery

HTTP Smooth Streaming (HSS)

Also known as HSS. Created by Microsoft as an adaptive streaming solution for web browsers. The main downside is that it has no support outside of Windows systems, and it has even been dropped from Microsoft's latest browser, Edge, in favor of HLS.

HTTP Live Streaming (HLS)

Also known as HLS. This uses segmented TS files internally and is the streaming protocol developed for iOS devices. Due to the many iOS devices it sees widespread use. The biggest downsides are the high overhead and high latency.

Flash: HTTP Dynamic Streaming (HDS)

HTTP Dynamic Streaming, also known as HDS. This uses segmented F4S (FLV-based) files internally and is the last Flash protocol. It was created as a response to HLS, but never saw widespread use.

Dynamic Adaptive Streaming over HTTP (MPEG-DASH)

More commonly known as MPEG-DASH. It was meant to unify the splintered segmented streaming ecosystem under DASH; instead, it standardized all of the existing protocols under a single name. The current focus is on reducing complexity and latency.

Which protocol should you pick?

If available, you should always pick a stream based protocol, as these will give you the best results. Sadly, most devices/browsers do not support most stream based protocols. Every device has its own preferred delivery format, which complicates matters.

Some protocols look like they’re supported by every device, so why not just stick to those? Well, every protocol also comes with their own advantages and disadvantages. Below is a table that attempts to clear up the differences:

| Protocol | Type | Platforms | Trick play | Latency | Overhead | Video codecs* | Audio codecs* | Status |
|---|---|---|---|---|---|---|---|---|
| RTMP | Streaming | Flash | Yes | Low | Low | H264 | AAC, MP3 | Legacy |
| RTSP | Streaming | Android, native players | Yes | Low | Low | Practically all | Practically all | Legacy |
| WebRTC | Streaming | Browsers | Yes | Low | Low | H264, VP8, VP9 | Opus, AAC, MP3 | Active development |
| FLV | Progressive file | Flash | No | Low | Medium | H264 | AAC, MP3 | Legacy |
| MP4 | Progressive file | Browsers, native players | No | Low | High | H264, HEVC, VP8, VP9 | AAC, MP3 | Maintained |
| MPEG-TS | Progressive file | TV, native players | No | Low | Very high | H264, HEVC | AAC, MP3, Opus | Maintained |
| Ogg | Progressive file | Browsers, native players | No | Low | Medium | Theora | Opus, Vorbis | Maintained |
| Matroska | Progressive file | Native players | No | Low | Low | Practically all | Practically all | Maintained |
| WebM | Progressive file | Browsers, native players | No | Low | Low | VP8, VP9 | Opus, Vorbis | Active development |
| HSS | Segmented file | Scripted players, Silverlight | Yes | High | High | H264, HEVC | AAC, MP3 | Legacy |
| HLS | Segmented file | iOS, Safari, Android, scripted players | Yes | Very high | Very high | H264, HEVC | AAC, MP3 | Active development |
| HDS | Segmented file | Flash | Yes | High | Medium | H264 | AAC, MP3 | Legacy |
| DASH | Segmented file | Scripted players | Yes | High | Varies | Varies | Varies | Active development |

*Only codecs still in common use today are listed.

In the end you can pick your preferred protocol based on the features and support, but to truly reach everyone you will most likely have to settle for a combination of protocols.

Relating all of the above to MistServer, you can see that we are still missing some of the protocols mentioned above. We are adding support for those in the near future. For the next blog post Erik will write about AV1, the upcoming (open) video codec.

17 May 2017

[Release] Stable release 2.11 now available!

Hello everyone! Stable release 2.11 of MistServer is now available! The full change log is available here and downloads are here. Here are some highlights: Pro feature:Access log added Access log information of viewers connecting to MistServer is now logged. Pro feature: New UDP-based...

Hello everyone! Stable release 2.11 of MistServer is now available! The full change log is available here and downloads are here. Here are some highlights:

  • Pro feature: Access log added. Access log information of viewers connecting to MistServer is now logged.
  • Pro feature: New UDP-based API added
  • Pro feature: Session tagging. You can now freely add tags to sessions and execute actions on sessions with a specific tag.
  • Pro feature: HLS file and URL (pull) input. You can now use .m3u8 file structures or HTTP urls as input for MistServer.
  • Pro feature: .wav output support is added
  • Pro feature: PCM A-law codec support for RTSP and WAV
  • Pro feature: Opus audio codec support for RTSP
  • Feature: Opus audio codec support for Ogg
  • Feature: Password no longer required when logging into the interface using localhost.
  • Pro Improvement: Prometheus settings can now be changed during runtime
  • Pro Improvement: Updater no longer blocks API access while running, updates can now be performed as rolling update without disconnecting users
  • Pro Improvement: RTMP push output now compatible with Facebook and Youtube
  • Improvement: Console output is now colour-coded
  • Improvement: Local API access no longer requires authorization
  • Improvement: Overhaul on all analysers, now all standardized in usage.
  • Improvement: API changed to always return minimized-style output
  • Improvement: Backported many previously Pro-only API calls to the OS edition; see the manual for details
  • Bugfix: ".html" access to streams now works correctly when used behind a proxy
17 May 2017

[Blog] Connecting to our API with PHP

Hello there! Carina here. I reckon that those of you who are using MistServer have, perhaps without realising it, frequently interacted with its API: the MistServer Management Interface (or MI for short) is a javascript based webpage that uses the API...

Hello there! Carina here.

I reckon that those of you who are using MistServer have, perhaps without realising it, frequently interacted with its API: the MistServer Management Interface (or MI for short) is a javascript based webpage that uses the API to communicate with Mist. Through the API, it requests information from MistServer and saves any configuration edits.

For most users, the MI is all they'll need to configure MistServer to suit their needs. However, for some, it's not suitable. For example: suppose you'd want a group of users to be able to configure only a stream that belongs to them. MistServer can facilitate multiple users, but it doesn't support a permission system of the sort directly. Instead, PHP could be used to translate user inputs to API calls.

In this blog post, I'll explain what such a PHP implementation might look like. All the PHP code used for this example will be provided. I won't get too technical in the blog post itself, because I reckon that those who want all the details can read the PHP code, and those who can't, needn't be bothered with the exact reasons for everything. Additional information about how to use MistServer's API can be found in the documentation.

Resources

Communicating with MistServer

The first step is to set up a way to send HTTP POST requests to MistServer, with a JSON payload. I'll call this mistserver.php. I've used CURL to handle the HTTP request. MistServer will respond with a JSON encoded reply, which is translated to an associative array.

If MistServer is hosted on the same machine that will be running the php, authorization isn't required (new in 2.11). Otherwise, MistServer will reply telling us to authenticate using a special string. If this is the case, the password will be encrypted together with the authentication string, and sent back to MistServer along with the username. Once MistServer reports that the status is OK, the authentication part is stripped from MistServer's reply, and returned.

By default, we'll be using minimal mode, which means we'll be using less bandwidth, but mostly that MistServer's response will be less bulky and thus more readable.
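To give an idea of what mistserver.php could look like, here is a minimal sketch. It assumes MistServer runs on the same machine (so no authorization is needed) and listens on the default port 4242; the exact API url and the name of the command variable are assumptions, so check the documentation for your setup:

<?PHP
  //minimal sketch: send one API request to a local MistServer and return the reply as an array
  function mist_request($data) {
    $curl = curl_init("http://localhost:4242/api");
    curl_setopt($curl, CURLOPT_POST, true);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    //the request is a JSON payload, sent as a POST variable
    curl_setopt($curl, CURLOPT_POSTFIELDS, http_build_query(Array("command" => json_encode($data))));
    $reply = curl_exec($curl);
    curl_close($curl);
    if ($reply === false) { return false; }
    //MistServer answers with JSON; convert it to an associative array
    return json_decode($reply, true);
  }
?>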

Actually requesting things!

I've purposely left out showing any errors in mistserver.php, so that you guys can just copy the file and use it in your projects. We should probably still tell users when something happens, though! I've made a few usage examples that do actually output something readable, in a separate file, index.php.

MistServer never replies with an error key directly in its main object, so I've used that to report CURL or API errors.

I've created a new function, getData(), that adds error printing around the main communication function. It returns false if there was an error and the data array if there wasn't.

Reading

We're all set! Let's actually start requesting some information. How about we check which protocols are enabled?

getCurrentProtocols() calls getData() and asks MistServer to respond with the config object. We check if the communication was successful, and if the config key exists. Then, we loop over the configured protocols and print the connector name. That's it!

Sent:

Array(
  "config" => true
)

Received:

Array(
  "config" => Array(
    "protocols" => Array(
      0 => Array(
        "connector" => "HTTP",
        "online" => 1
      ),
      [..]
    ),
    [..]
  )
)
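A sketch of what getCurrentProtocols() could look like on top of the getData() helper described above:

  function getCurrentProtocols() {
    //request only the config object
    $data = getData(Array("config" => true));
    //stop if the request failed or the reply doesn't contain what we expect
    if (($data === false) || !isset($data["config"]["protocols"])) { return; }
    //print the connector name of every configured protocol
    foreach ($data["config"]["protocols"] as $protocol) {
      echo $protocol["connector"]."\n";
    }
  }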

Another example. Let's read the logs, and if there are any errors in it, print the most recent one. Note that MistServer only provides the last 100 log entries through the API, so there might not be any.

getLastErrorLog() also calls getData(), but this time requests the log array. If we get it back, we reverse the array to get the newest entries first, and then start looping over them, until we find an error. If we do, we print it.

Sent:

Array(
  "log" => true
)

Received:

Array(
  "log" => Array(
    0 => 1494493673,
    1 => "CONF",
    2 => "Controller started"
  ),
  [..]
)

As you can see, using MistServer's API is actually quite straightforward. It's time we up the bar (slightly, don't worry) and try changing some of MistServer's configuration.

Writing

How about adding a stream? For this we'll use the addstream command. This command is available in the Open Source version from 2.11; it was already available in Pro versions.

Two parameters need to be included: the stream name and source. There can be more options, depending on what kind of source it is. Note that if addstream is used, and a stream with that name already exists, it will be overwritten.

The addStream($name,$options) function calls getData() with these values, checks if the stream exists in the reply, and returns it.

Sent:

Array(
  "addstream" => Array(
    "example" => Array(
      "source" => "/path/to/file.flv"
    )
  )
)

Received:

Array(
  "streams" => Array(
    "example" => Array(
      "name" => "example",
      "source" => "/path/to/file.flv"
    ),
    "incomplete list" => 1
  )
)
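As a sketch, addStream() could look something like this, mirroring the request and reply shown above:

  function addStream($name,$options) {
    $data = getData(Array("addstream" => Array($name => $options)));
    if ($data === false) { return false; }
    //MistServer echoes (part of) the stream list back; check that our stream is in it
    if (isset($data["streams"][$name])) { return $data["streams"][$name]; }
    return false;
  }

For example: addStream("example", Array("source" => "/path/to/file.flv"));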

Alright, great. Now, let's remove the stream again with the deletestream command. This command is available in the Open Source version from 2.11 as well.

The deleteStream($name) function calls getData(), and checks if the stream has indeed been removed.

Sent:

Array(
  "deletestream" => Array(
    0 => "example"
  )
)

Received:

Array(
  "streams" => Array(
    "incomplete list" => 1
  )
)

And there you have it. The API truly isn't that complicated. So get on with it and integrate MistServer into your project!

Next time, you can look forward to Balder, who will be talking about the different streaming protocols.

1 May 2017

[Blog] Deep-dive: the triggers system

Hello streaming media enthusiasts! We're back from the NAB show and have slept off our jetlag. But that's enough about that - a post about our triggers system was promised, and here it is. Why triggers? We wanted to add a method...

Hello streaming media enthusiasts! We're back from the NAB show and have slept off our jetlag. But that's enough about that - a post about our triggers system was promised, and here it is.

Why triggers?

We wanted to add a method to influence the behavior of MistServer on a low level, without being overly complicated. To do so, we came up with the triggers system. This system is meant to allow you to intercept certain events happening inside the server, and change the outcome depending on some decision logic. The "thing" with the decision logic is called the trigger handler, as it effectively "handles" the trigger event. Further requirements to the trigger system were no ties to any specific programming language, and the ability to accept handlers both remote and local. After all, at MistServer we are all about openness and making sure integrations are as friction-less as possible.

The types of events you can intercept were purposefully made very wide: all the way from high-level events such as server boot and server shutdown to low-level events such as connections being opened or closed, and everything in between. Particularly popular is the USER_NEW trigger, which allows you to accept or deny views on a per-session level, and as such is very suitable for access control systems.

How?

After a lot of deliberation and tests, we finally settled on a standard input / output system as the lowest common denominator that all scripting and programming languages would support. As input, a trigger handler will receive a newline-separated list of parameters that contain information about the event being triggered. As output, the system expects either a simple boolean true/false or a replacement value to be used (where applicable, e.g. when rewriting URLs).

Two implementations were made: a simple executable style, and an HTTP style. In the executable style, the trigger type is sent as the only argument to an executable which is run, the trigger input is piped into its standard input, and the result is read from standard output. In the HTTP style, the trigger type is sent as an HTTP header, the URL is requested, and the trigger input is sent as the POST body, while the result is read from the response body. These implementations allowed us to keep nearly identical internal handling mechanics, ensuring the behavior is consistent regardless of the trigger handler type used.
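As a minimal sketch of the executable style (PHP is used here purely as an example language), a handler could look like this:

#!/usr/bin/env php
<?PHP
  //the trigger type arrives as the only argument
  $trigger_type = $argv[1];
  //the newline-separated trigger input arrives on standard input
  $payload = explode("\n", file_get_contents("php://stdin"));
  //decision logic would go here; the result is written to standard output
  echo 1; //allow
?>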

Further influencing behavior

While the triggers system allows for a lot of decision logic and possible use cases to be implemented, there are some things that are beyond the reach of a simple system like this. For example, you may want to have a system that runs a check if some connection is allowed to continue periodically, or when certain events happen in other parts of your business logic (e.g. payment received (or not), stream content/subject changing over time, etc).

For this purpose, after we released our triggers system, we've slowly been expanding it with API calls that supplement it. For example, the invalidate_sessions call will cause the USER_NEW trigger to be re-run for all already-accepted connections, allowing you to "change your mind" on these decisions, effectively halting playback at any desired point.

In the near future, we will also be releasing something we've been working on for a while now: our stream and session tagging systems. These will allow you to add "tags" to streams and sessions on the fly, and run certain triggers only when particular tags are present (or missing), plus the list of tags will be passed as one of the parameters to all applicable trigger handlers. This will add flexibility to the system for even more possibilities. Also coming soon is a new localhost-only UDP interface for the API, allowing you to simply blast JSON-format API calls to port 4242 to localhost over UDP, and they will be executed. This is a very low-cost entry point for the API, as UDP sockets can be created and destroyed on a whim and do practically no error checking.
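As a rough sketch of how that UDP interface could be used from PHP, assuming the same JSON command format as the HTTP API (active_streams is used purely as an example call):

<?PHP
  //fire-and-forget: no reply handling and practically no error checking
  $sock = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
  $cmd = json_encode(Array("active_streams" => true));
  socket_sendto($sock, $cmd, strlen($cmd), 0, "127.0.0.1", 4242);
  socket_close($sock);
?>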

Feedback

We'd love to hear from our users what things they would like to influence and use the triggers system and API calls for. What cool things are you planning (or wanting) to do? Let us know! Reach out to us using the contact form, or simply e-mail us at info@ddvtech.com.

That was it for this post! The next blog post will be by Carina, showing how to effectively use our API from PHP. Until next time!

18 Apr 2017

[Blog] Stream Latency

Hi readers, as promised by Balder, this blog post will be about latency. When streaming live footage, latency is the amount of time that passes between what happens on camera, and the time that it is shown on the stream...

Hi readers, as promised by Balder, this blog post will be about latency.

When streaming live footage, latency is the amount of time that passes between what happens on camera, and the time that it is shown on the stream where it is watched. And while there are cases where artificial latency is induced into the stream to allow for error correction and selecting the right camera to display at the correct time, in general you want your latency to be as small as possible. Apart from this artificial latency, I will cover some major causes of latency encountered when handling live streaming, and the available options for reduction of latency in these steps.

The three main categories where latency is introduced are the following:

Encoding latency

The encoding step is the first in the process when we follow our live footage from the camera towards the viewer. Due to the wide availability of playback capabilities, H.264 is the most commonly used codec to encode video for consumer-grade streams, and I will therefore mostly focus on this codec.

While encoders are becoming faster at a rapid pace, the default settings for most of them are geared towards optimization for VoD assets. To reduce size on disk, and through this reduce the bandwidth needed to stream over a network, most encoders will build an in-memory buffer of several packets before sending any out. The codec allows frames to reference frames both before and after the current one, which allows for better compression: when the internal buffer is large enough, the encoder can pick which frames to reference in order to obtain the smallest set of relative differences. Turning off the option for these so-called bi-predictive frames, or B-frames as they are commonly called, decreases latency in exchange for a somewhat higher bandwidth requirement.

The next bottleneck that can be handled in the encoding step is the keyframe interval. When using a codec based on references between frames, sending a 'complete' set of data on a regular interval helps with decreasing the bandwidth necessary, and is therefore employed widely when switching between different cameras on live streams. It is easily overlooked, however, that these keyframe intervals also affect the latency on a large scale, as new viewers cannot start viewing the stream until they have received such a full frame: they have no data to base the different references on before this keyframe. This either causes new viewers to have to wait for the stream to become viewable, or, more often, causes new viewers to be delayed by a couple of seconds, merely because that was the latest available keyframe at the time they started viewing.
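In ffmpeg/x264 terms, for example, disabling B-frames and shortening the keyframe interval could look like this, where -bf 0 turns off B-frames and -g 25 asks for a keyframe every 25 frames (one second at 25 fps); treat it as a sketch rather than a recommended encoding ladder:

ffmpeg -i INPUT -c:a aac -strict -2 -c:v h264 -bf 0 -g 25 -f flv rtmp://IP:PORT/live/STREAMNAME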

Playback latency

The protocol used, both to the server hosting the stream and from the server to the viewers, has a large influence on the latency of the entire process. With many vendors switching towards segment based protocols in order to make use of widely available caching techniques, the requirement to buffer an entire segment before being able to send it to the viewer is introduced. In order to avoid bandwidth overhead, these segments are usually multiple seconds in length, but even when considering smaller segment sizes, the different buffering rules for these protocols and the players capable of displaying them cause an indeterminate amount of latency in the entire process.

While the most effective method of decreasing the latency introduced here is to avoid the use of these protocols where possible, on some platforms using segmented protocols is the only option available. In these cases, setting the correct segment size along with tweaking the keyframe interval is the best method to reduce the latency as much as possible. This segment size is configurable through the API in MistServer; even mid-stream if required.

Processing latency

Any processing done on the machine serving the streams introduces latency as well, though often to increase the functionality of your stream. A transmuxing system, for example, processes the incoming streams into the various different protocols needed to support all viewers, and to this purpose must maintain an internal buffer of some size in order to facilitate this. Within MistServer, this buffer is configurable through the API.

On top of this, for various protocols, MistServer employs some tricks to keep the stream as live as possible. To do this we monitor the current state of each viewer, and skip ahead in the live stream when they are falling behind. This ensures that your viewers observe as little latency as possible, regardless of their available bandwidth.

In the near future, the next release of MistServer will contain a rework of the internal communication system, removing the need to wait between data becoming available on the server itself, and the data being available for transmuxing to the outputs, reducing the total server latency introduced even further.

Our next post will be by Jaron, providing a deep technical understanding of our trigger system and the underlying processes behind it.

— Erik

18 Apr 2017

[News] MistServer team at NAB show from 20th to 29th.

Hello everyone! The majority of the MistServer team will attend the NAB show from April 20th to April 29th. During this time we will have limited availability, and replies might take slightly longer than usual. If you happen to be...

Hello everyone! The majority of the MistServer team will attend the NAB show from April 20th to April 29th. During this time we will have limited availability, and replies might take slightly longer than usual. If you happen to be in Las Vegas feel free to drop by our booth SU11704CM.

3 Apr 2017

[Blog] Live streaming with MistServer and OBS Studio

Hello everyone! As previously described by Jaron this blog post will primarily be about the basics of live streaming and using OBS Studio specifically to do it. We have noticed that most beginners are confused by how to properly set...

Hello everyone! As previously described by Jaron this blog post will primarily be about the basics of live streaming and using OBS Studio specifically to do it. We have noticed that most beginners are confused by how to properly set up a live stream, as most questions we receive are questions on how to get their live stream working.

Basic Live streaming information

Most popular consumer streaming applications use RTMP to send data towards their broadcast target. The most confusing part for newer users is where to put which address, mostly because the same syntax is used for both publishing and broadcasting.

Standard RTMP url syntax
rtmp://*HOST*:*PORT*/*APPLICATION*/*STREAM_NAME*

Where:

  • *HOST* = The IP address or hostname of the server you are trying to reach
  • *PORT* = The port to be used; if left out it will use the default 1935 port.
  • *APPLICATION* = This is used to define which module should be used when connecting, within MistServer, this value will be ignored or used as password protection. The value must be provided, but may be empty.
  • *STREAM_NAME* = The stream name of the stream: used to match stream data to a stream id or name.

This might still be somewhat confusing, so I will make sure to give an example below.

  • Address of server running OBS: 192.168.137.19
  • Address of server running MistServer: 192.168.137.26
  • Port: Default 1935 used
  • Application: not used for mistserver, we use live to prevent unreadable URLs.
  • Stream name: livestream

MistServer settings

You can set the correct setting in MistServer when creating or editing a stream using the stream panel in the left menu.

  • Stream name: "livestream" no surprises here, both servers need to use the same stream name in order to make sure they are both connecting the stream data to the proper stream name.
  • Source: "push://192.168.137.19" MistServer needs to know what input will be used and where to expect it from. Using this source will tell MistServer to expect an incoming RTMP push from the ip 192.168.137.19. This will also make sure that only the address 192.168.137.19 is allowed to push this stream. You could also use "push://" without any address. That would allow any address to push this stream. Great for testing, but not recommended for production.

Image of the stream settings within MistServer

OBS Stream settings

You can find the OBS settings at the top menu under "File -> Settings". You will need the stream settings to set up the push towards MistServer.

  • Stream Type: Custom Streaming Server This is the category MistServer falls under.
  • URL: "rtmp://192.168.137.26/live/" Here we tell OBS to push the stream towards MistServer which can be found at 192.168.137.26. Note that this url includes the application name.
  • Stream key: "livestream" Here you will need to fill in the Stream id, which is the stream name we used in MistServer.

Image of the OBS Studio stream settings

OBS advanced settings

You can get to these settings by selecting advanced output mode at the output settings within OBS.

The basic stream settings will produce workable video in most cases, however there are two settings that can hugely impact your stream quality, latency and user experience.

  • Rate control: This setting dictates the bitrate or "quality" of the stream. It is mostly about stream quality and user experience.
  • Keyframe interval: Video streams consist of full frames and data relative to the full frames; this setting decides how often a full frame appears. It heavily influences latency.

Optimizing for low latency

Low latency is mostly decided by the keyframe interval. As mentioned above video streams consist of full frames and data relative to these full frames. Keyframes are the only valid starting points for playback, which means that more keyframes allow for a lower latency. Having keyframes appear often will allow your viewers to hook onto a point closer to the actual live point, this does come with a downside however. A full frame has a higher bit cost, so you will need more bandwidth to generate a stream of the same quality compared to a lower amount of keyframes.

Optimizing for stream quality and user experience

Stream quality and user experience is mostly decided by the rate control of the stream. The rate control decides just how much bandwidth a stream is allowed to use. As you might guess a high amount of bandwidth usually means better quality. One thing to keep in mind however is that your output can never improve the quality of your stream beyond your input, so a low quality stream input will never improve in quality even if you allocate more bandwidth to it.

With user experience, in this case, I'm talking about the stability and smoothness of the video playback. This is mostly decided by the peak bitrate of the video. The peak bitrate may rise when a lot of sudden changes happen within the video; when this is the case, some viewers may run into trouble as the required bandwidth could exceed their (allowed) connection limit. When this happens they will run into playback problems like buffering or skips in the video. At the same time, a constant bitrate will remove peaks, but will also reduce the stream quality when a high bitrate was needed to properly show the video.

Balancing low latency with stream quality and user experience

As you might've guessed this is where the real struggle is, low latency and stream quality tend to increase with higher bandwidth, while user experience increases with lower bandwidth. Luckily I can share two settings that tend to work great for us.

The constant bitrate profile

This is the profile we use when we want what we consider a "normal" stream. It uses a constant bit rate, but a good mix between quality and latency. It should suit most situations, but your mileage may vary. We use the following settings:

  • Rate control: CBR
  • Bit rate: 2000
  • Keyframe interval: 5
  • No custom buffer size
  • CPU usage preset: Very fast
  • Profile: high
  • Tune: none
  • No additional x264 options

Image of the stream settings described above

Setting the rate control to CBR means constant bit rate: the stream will never go above the given bit rate (2,000 kbps). The keyframe interval of 5 means that segmented protocols will not be able to make segments smaller than 5 seconds. HLS is the highest latency protocol, requiring 3 segments before playback is possible, which means a minimum latency of 15 seconds for HLS. The CPU usage preset very fast means relatively less CPU time is spent on each frame, sacrificing quality in exchange for encoding speed. Profile high means newer (now commonplace) H264 optimizations are allowed, providing a small increase in quality per bit. We didn't set any Tune or additional x264 options, as these should really only be used when you know what they do and how they work.
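To make the relation between keyframe interval, segment count and minimum latency concrete, here is a small back-of-the-envelope sketch in Javascript. It simply multiplies the numbers discussed above; the function name is my own illustration and not part of OBS or MistServer.

// Rough estimate of the minimum latency of a segmented protocol:
// segments cannot be shorter than the keyframe interval, and a player
// needs a number of segments buffered before playback can start.
function minimumSegmentedLatency(keyframeIntervalSeconds, segmentsNeeded) {
  return keyframeIntervalSeconds * segmentsNeeded;
}

// The CBR profile above: 5 second keyframes, HLS wants 3 segments.
console.log(minimumSegmentedLatency(5, 3) + " seconds"); // prints "15 seconds"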

The constant quality profile

This is the profile we use when we want a stream to keep a certain level of quality and are less concerned about the bit rate. In practice we use this less often, but I thought it handy to share nonetheless:

  • Rate control: CRF
  • CRF: 25
  • Keyframe interval: 5
  • CPU usage preset: Very fast
  • Profile: high
  • Tune: none
  • No additional x264 options

Image of the stream settings described above

Setting the rate control to CRF means constant rate factor. This setting goes from 0 to 51, where 0 is lossless and 51 is the worst quality possible. We tend to like it around 25, but the useful range is around 17 to 28. The other settings are discussed in the section above.

Optimal stream settings

Your optimal stream settings might not be the same as the profiles shared above, but we hope this gives you a good base to start experimenting from, and a bit of insight into what to change when you want lower latency, higher quality or better stability.

OBS Output settings

I will not go into much detail here. The standard OBS settings should cover most streaming use cases. The encoder option decides how the stream is encoded; hardware accelerated encoders give the best performance, so it is best to use anything other than x264 if it is available. If you must use x264 because you have no other (hardware) option, the veryfast preset is advisable, as it is less intensive on your PC. The best way to find out which settings are best for you is to experiment with them a bit.

Image of the OBS Studio output settings

Start streaming

Now that the settings for MistServer and OBS are done, we are all good to go. To start streaming all we will have to do is press the Start Streaming button in the bottom right corner of OBS.

Image of OBS Studio actively pushing a stream towards MistServer

Now that we are pushing the stream you should see the status change within MistServer from Unavailable to Standby and then to Active. Unavailable means the source is offline, Standby means the source is active and playback might already be possible, and Active means the source is active and playback is guaranteed on all supported outputs.

Image of the streams panel showing stream status as it updates

To see if the stream is working we can click Preview to get to the preview panel. If everything is set up correctly, a stream will appear soon enough.

Image of the preview panel within MistServer showing video playback

Getting your stream to your viewers

Now that we have verified the setup works we will want to make sure our viewers can watch as well. The easiest method is to use our provided embeddable code that will make your stream available on any webpage. You can find this under the Embed option at the top in the preview page, or to the right in the main streams panel.

At the embed page you can set up how the player should behave once it is loaded. The settings should be self-explanatory. Do note that the embed code options are not saved and will be reset once you leave the embed page. Under the embed options a list of supported protocols will be shown. This list is only available if the stream source is active, as it is based on the codecs of the incoming stream.

Image of the default embeddable code with used stream settings

All we have to do is change the embed code options to our liking and copy the Embed code to a webpage. I will be using the default options and have copied the result to a simple html file as shown below.

Image of the embed code added to a standard HTML page

After making the webpage available, we should be able to watch our stream without any problems on any device that has a browser.

Image of video playback on multiple devices

Well, that is it for the basics of getting a stream to work and reaching your viewers using MistServer. Of course, getting the stream to work and tuning the stream just right are not the same thing, but having playback definitely helps. Most notable is that the point of playback is not the same for every device: different protocols are used for different devices, inducing different delays. This brings us to our next topic, latency, which Erik will cover in the next post.

Edited on 2017-11-07: Added OBS advanced settings

21 Mar 2017

[News] Non-commercial license now available!


Hello everyone,

MistServer already had two licensing models available: the free open source edition and the paid enterprise (also known as "Pro") edition. Starting today, we're adding a third option: the non-commercial license.

This new license is intended for non-commercial users, and can be used by anyone who is not using MistServer (directly or indirectly) to generate revenue. It contains all the great features and extras that the enterprise edition has, but in exchange for the non-commercial-use-only limitation the price has been lowered significantly. This edition is available for only $9.99 USD per month per instance.

Not sure if you're allowed to use the non-commercial edition for your intended use? Just contact us and we'll answer any questions you may have.

20 Mar 2017

[Release] Stable release 2.10.1 now available!


Hello everyone! Stable release 2.10.1 of MistServer is now available! The full change log is available here and downloads are here. Here is a short summary:

  • The meta-player now detects offline streams coming online. It will automatically refresh and load the newly available stream, ideal for streams that are only online at specific times, for example.
  • Many small usability improvements to the meta-player. For example, the volume control is now more sensitive.
  • FLV VoD input is now significantly faster.
  • Full unicode support.
  • Pro Feature: Added LIVE_BANDWIDTH trigger. This trigger alerts you when a live stream goes over a specifically set bit rate limit, allowing you to react.
  • Pro Feature: RTSP input now supports receiving initialization data in-band. This means streams with invalid or incomplete SDP data can still be used.
  • Various other bugfixes and small improvements.
15 Mar 2017

[Blog] Behind the scenes: MP4 live


Hello streaming media enthusiasts! It's Jaron again, back with my first proper blog post after the introduction I posted earlier this year. As mentioned by Carina in the previous post, I'll be explaining the background of MP4 live streaming in this post.

What is MP4?

MP4 is short for MPEG-4 Part 14. It's a media container standard developed by the International Organization for Standardization, and is commonly recognized by most of the world as "a video file" these days. MP4 is based on Apple's QuickTime file format as published in 2001, and they are effectively (almost) the same thing. As a container, MP4 files can theoretically contain all kinds of data: audio, video, subtitles, metadata, etcetera.

MP4 has become the de-facto standard for video files these days. It uses a mandatory index, which is usually placed at the end of the file (since, logically, the index can only be generated after the entire file has been written).

A file where this index is moved to the beginning of the file - so it is available at the start of a download and playback can begin without receiving the entire file first - is referred to as a "fast start" MP4 file. Since MistServer generates all its outputs on the fly, our generated MP4 files are always such "fast start" files, even if the input file was not.

The impossible index

Such a mandatory index poses a challenge for live streams. After all: live streams have no beginning or end, and are theoretically infinite in duration. It is impossible to generate an index for an infinite duration stream, so the usual method of generating MP4 files is not applicable.

Luckily, the MP4 standard also contains a section on "fragmented" MP4. Intended for splitting MP4 data into multiple files on disk, it allows for smaller "sub-indexes" to be used for parts of a media stream.

MistServer leverages this fragmented MP4 support in the standard, and instead sends a single progressively downloaded file containing a stream of very small fragments and sub-indexes. Using this technique, it becomes possible to livestream media data in a standard MP4 container.
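As a rough illustration of the difference (this is not MistServer code, just a sketch of the top-level "box" layouts involved), a regular MP4, a "fast start" MP4 and a live fragmented MP4 could be pictured like this:

// Simplified top-level box layouts, for illustration only.
const regularMp4 = ["ftyp", "mdat", "moov"]; // index (moov) written after the data
const fastStartMp4 = ["ftyp", "moov", "mdat"]; // index moved to the front

// A live fragmented MP4 is a small header followed by an endless
// series of sub-index (moof) and data (mdat) pairs.
const liveFragmentedMp4 = ["ftyp", "moov"];
for (let fragment = 0; fragment < 3; fragment++) {
  liveFragmentedMp4.push("moof", "mdat");
}
console.log(liveFragmentedMp4.join(" ")); // ftyp moov moof mdat moof mdat moof mdat ...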

Why?

The big reason for wanting to do this is because practically all devices that are able to play videos will play them when provided in MP4 format. This goes for browsers, integrated media players, smart TVs - literally everything will play MP4. And since fragmented MP4 has been a part of the standard since the very beginning, these devices will play our live MP4 streams as well.

The really fun part is that when used in a browser, this method of playback requires no plugins, no scripts, no browser extensions. It will "just work", even when scripting is disabled by the user. That makes MP4 live the only playback method that can currently play a live stream when scripting is turned off in a browser. When used outside of a browser, all media players will accept the stream, without needing a specialized application.

Pitfalls

MP4 live is a relatively new and (until now) unused technique. As such, there are a few pitfalls to keep in mind. Particularly, Google Chrome has a small handful of bugs associated with this type of stream. MistServer does browser detection and inserts workarounds for these bugs directly into the bitstream, meaning that even the workarounds for Chrome compatibility do not require client-side scripting.

Now that most browsers are providing roughly equivalent theoretical compatibility, some have started to pretend to be Chrome in their communications in an effort to trigger the "more capable" version of websites to be served. This throws a wrench into our bug workaround efforts, as such browsers are wrongly detected to be Chrome when they are not. Applying our workaround to any other browser than Chrome causes playback to halt, so we must correctly detect these not-Chrome browsers as well, and disable the workaround accordingly. MistServer does all this, too.

Finally, iOS devices and all Apple software/hardware in general don't seem to like this format of stream delivery. This makes sense, since MP4 was based on an Apple format to begin with, and the original Apple format did not contain the fragmented type at all. It would seem that Apple kept their own implementation and did not follow the newer standard. While it is logical when looked at in that light, it is a bit ironic that the only devices that will not play MP4 live are devices made by the original author of the standard it is based on. Luckily, the point is a bit moot, as all those devices prefer HLS streams anyway, and MistServer provides that format as well.

Going forward

Naturally, we don't expect MP4 to stay the most common or best delivery method until the end of time. We're already working on newer standards that might take the place of MP4 in the future, and are planning to automatically select the best playback method for each device when such methods become better choices for them.

That was it for this post! You can look forward to Balder's post next time, where he will explain how to use OBS Studio with MistServer.

1 Mar 2017

[Blog] The MistServer Meta-Player

Hello readers!
I'm Carina, and I'm responsible for web development here at DDVTech/MistServer. Today, I'll be talking to you about our solution for viewing streams on a website: the meta-player.

Why we've built our own player

Our meta-player started its life as part of the MistServer Management Interface - a simple script switching between a video tag and a flash object - designed only to preview configured streams. It soon became clear that some of our clients wanted to use the player on their own website(s), and to accommodate them we added a copiable embed code to the Management Interface.
However, we felt that a player running in production had to comply with a higher standard than something that just enables our customers to check if a stream is working. It had to do more than just work: it had to keep on playing through stream hiccups while having a similar appearance regardless of playback mode. Thus it was decided to rework our meta-player into its current form, as it was released with MistServer 2.7 in autumn 2016.

What makes our player different

The new meta-player was designed as a shell around other, existing players. It loops over the players it has available and asks each which of the protocols MistServer is broadcasting it can play in the current browser, if any. It then constructs the selected player, translating the specified options (autoplay, which tracks to use, etc.) into a format that that particular player understands. It also adds a control bar to provide a consistent interface.
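A heavily simplified sketch of that selection loop could look like the snippet below. The names are made up for illustration and do not reflect the meta-player's actual internals.

// For illustration only: pick the first player/protocol combination
// that can actually be played in the current browser.
function selectPlayer(availablePlayers, broadcastProtocols) {
  for (const player of availablePlayers) {
    for (const protocol of broadcastProtocols) {
      if (player.canPlay(protocol)) {
        return { player: player, protocol: protocol };
      }
    }
  }
  return null; // nothing playable in this browser
}

const players = [
  { name: "html5", canPlay: (p) => p === "MP4" || p === "HLS" },
  { name: "flash", canPlay: (p) => p === "RTMP" }
];
console.log(selectPlayer(players, ["HLS", "MP4"])); // picks the html5 player with HLS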

But the meta-player's task is not done yet: it monitors if the stream is playing smoothly, and, if it isn't, it can ask the viewer if they want to reload or switch to another playback method.

Because the meta-player only has to support MistServer as its streaming source, it can integrate more closely with our meta information. It will detect tracks, subtitles and more without requiring additional configuration.

Usage and customisation

The easiest way to get started with our meta-player is through the MistServer Management Interface. Configure the stream you want to display, and visit the Preview tab to see it being played by the meta-player. Then click the 'Embed' button. Under the heading 'embed code' you'll find a box with html code to paste to your website where you want to display the video.
Underneath you'll find a bunch of options for basic configuration of your player. The integrated help can explain what the options do.

If you want to place your video dynamically through Javascript, just include http://HOST/player.js on the webpage and call mistPlay("STREAM_NAME", options);, where options is an object containing additional configuration. It's easiest to create this object through the Management Interface and copy it from the embed code box.
Not shown in the Management Interface is the callback option: if this is set to a function, that function will be executed after the player is built.
The mistPlay function returns a player object, which contains methods such as play(), unload(), and more.
Options can also be defined globally in the variable mistoptions.
It's possible to change the way the player looks through CSS, and to do minor tweaks (to change player priority, for example) with Javascript.
An example implementation can be found here.
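As a minimal, hypothetical sketch of such an implementation: the stream name and host are placeholders, and apart from callback, mistoptions and the returned play()/unload() methods described above, the option names are illustrative rather than verified API.

// Assumes http://HOST/player.js has already been included on the page.
var mistoptions = {}; // global defaults, as described above (optional)

var options = {
  autoplay: true, // illustrative option name; copy the real options from the embed code box
  callback: function () {
    console.log("meta-player has been built"); // runs once the player is ready
  }
};

var player = mistPlay("STREAM_NAME", options);
// The returned player object exposes methods such as play() and unload():
// player.play();
// player.unload();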

More advanced users can write their own javascript code to add more players to the meta-player, or we could do it for you. In either case, you'll probably want to contact us.

The quest to improve

Because of the huge range of browsers across different devices on the market nowadays, optimizing playback on all of them is quite a challenge. We will continue to tweak and improve the meta-player's performance in a never ending effort as the browsers in use are sure to continue evolving.
On top of that, there is always room for improvement. For instance, from the next release (2.10) on, the sound volume control on the control bar is changed from a linear range to a quadratic one. This may seem trivial, but it enables users to more accurately control their volume if it is relatively low: the difference between muted and the minimum volume is much smaller.
If there is a feature you think should be added to our meta-player, please let us know.

For our next blog post you can look forward to Jaron, who will talk about MP4 live streaming.
21 Feb 2017

[Blog] Stream Splicing

Hello readers,

Erik here! As mentioned in the opening blog post, I will mostly post about innovations. Today I kick off with some work in progress on a feature that we call stream splicing.

What is stream splicing?

Stream splicing works by manipulating media data just before it leaves the media server. For example, this allows you to switch media sources or insert generated content while maintaining bitstream and container compliance, thus allowing advanced features without requiring player awareness or being limited to specific protocols.

What can I do with stream splicing?

The real power of this technology is that it allows you to adapt basically any stream on a per-viewer basis. While this opens up many feasible use cases, I will cover three of them in today's post.

  • Adaptive bitrate switching

    I will start out with a re-imagining of a widely used technology which allows for quality selection in your streams by your viewers. The established solution works by generating a Manifest, which is generally a playlist of various qualities of a single stream. This playlist is requested by a video player, and allows the player to select the best quality based on factors like display resolution and available bandwidth. Using a separate playlist for each quality, the player will request the actual video data. If the actual data segments are keyframe-aligned between the various qualities, the player is able to switch to a lower or higher quality on segment boundaries.

    While this is a proven solution that allows you to reach a wider audience, an individual user may still opt to select a higher quality than their available bandwidth allows them to view. The technology is also restricted to segmented protocols, reducing its usability for applications with low-latency requirements. On top of this, players will often request a segment of a higher quality just to see whether it arrives fast enough to switch to it, effectively wasting bandwidth as a single segment is requested twice but only played once.

    Using stream splicing to achieve the same effect on the server side gives you more fine-grained control over what your viewer gets to watch. By looking at the actual state of the server machine, which blocks on the data connection if the viewer is not reading data fast enough, a connection can be forced to a lower quality regardless of what the player requests (a rough sketch of this idea follows after these examples). By not relying on client awareness of quality switching, a more low-level sync can be achieved while providing compatibility with both progressive download and low-latency streaming protocols. Where bandwidth is limited or costly, viewers can also be forced to a lower quality stream as demand increases, allowing more clients to connect while you're working on scaling up.

  • Stream personalisation

    In applications everywhere, individualisation is gaining ground over a one-size-fits-all mindset. On large scale platforms it is becoming a requirement, and your customers expect to see this option. However, existing streaming technologies often make it really cumbersome to give your viewers a default language in either audio or subtitles.

    Our flexible triggers API already allows you to take specific actions based on events within the media server. Combining this with server side manipulation of your stream data, you can automatically select and influence the default language a single user will receive based on, for example, their user profile on your platform. The addition of extra triggers to our API will not only make it easier to implement these kinds of options on your platform, it will allow for full integration of the stream splicing feature into your existing system.

  • Advertisement injection

    Probably one of the most important elements of any streaming platform is monetization. While there are multiple solutions readily available to serve advertisements to your viewers, innovations like ad blockers on the client side prevent you from reaching every viewer, cutting into your profits. While fully encoding the desired advertisement into your stream ensures your viewers will see the advertisement, it does not allow the same flexibility given by client-side advertisements. In this scenario every viewer gets the same final stream.

    The latest Video Ad Serving Template (VAST) specification adds support for server side advertisement stitching and tracking, allowing the dynamic insertion of advertisements into a video stream. Combining this with splicing allows for individualized advertisements to be inserted without needing a custom player and effectively combats ad-blockers.

While not covering the entire range of possibilities, the examples above should give some insight into what you will be able to do with this technology. If you have a specific use case that's not covered here and you want to know whether we can help you achieve it, feel free to contact us to discuss it in more detail.
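As a purely conceptual sketch of the first example above (server-side quality selection based on how fast a viewer is actually reading data), the decision could look something like the snippet below. None of this reflects MistServer's real implementation; the thresholds and names are invented for illustration.

// Conceptual only: pick a quality track based on how full the viewer's
// outgoing send buffer is, i.e. whether writes towards them are backing up.
function pickQuality(qualities, bufferedBytes, bufferLimit) {
  // qualities is sorted from highest to lowest bitrate
  if (bufferedBytes < bufferLimit * 0.5) {
    return qualities[0]; // connection keeps up: highest quality
  }
  if (bufferedBytes < bufferLimit) {
    return qualities[Math.floor(qualities.length / 2)]; // struggling: middle quality
  }
  return qualities[qualities.length - 1]; // blocked: lowest quality
}

console.log(pickQuality(["1080p", "720p", "480p"], 900000, 1000000)); // "720p"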


When will it be ready?

As already mentioned, this is currently a work in progress. We are about to start testing it in the field, and are open to applications from interested parties. Assuming the field tests are successful, a release containing stream splicing will follow shortly after.

Feel free to contact us with any questions regarding the progress or field tests of this new feature at info@mistserver.org.

Our next blog post will be by Carina, explaining more about our recently released meta-player.

- Erik
30 Jan 2017

[Blog] Why use a media server, anyway?

Hello readers,

As mentioned before by Jaron, I will kick off the first post. I will be giving a short and simple overview on why you would want a media server instead of other solutions like a regular web server. I’ll feature what I think are the four biggest reasons to use a media server in favor of the alternatives.

1. Superior handling of media files means guaranteed delivery

As a media server specializes in media, it should come as no surprise that it will have superior handling of media files when compared to other solutions. A true media server can take your source file and make sure it gets to your viewer regardless of the chosen delivery method. The most important feature of any solution is to make sure your media works in any situation on any device.

Alternatives often cannot handle this transmuxing and must stick to the type of media prepared in advance, forcing your users to use a certain player, plugin or delivery method. Even worse, when that method isn't supported by their device or system, they will not be able to watch at all! Hosting a live stream in particular can become a chore if you don't have a media server, as most consumer-grade live streaming is based on RTMP, which is disappearing in favor of HTML5-capable methods.

A media server alleviates this problem by transmuxing the stream, making sure it can be viewed by anyone at any time, whether it is a live or on demand stream.

2. Deeper analysis is possible through media servers

Because media servers can handle media so much better they also allow deeper insight in how your media is consumed. You'll be able to see just what part of your media files or live streams is most popular, how many viewers you've had and through what protocol or device they've connected.

If you are not using a media server you will most likely be stuck with just a viewer count, without knowing whether viewers watched the entire thing or just a part of it. Furthermore, the viewer count may well be off by a significant amount when using segmented delivery methods, as it becomes hard to track individual views with these delivery methods.

Using the deeper knowledge you gain about your users allows you to decide where your focus for media should be and how to keep your viewers interested, or what area you could try to grow next.

3. Keep on growing

Probably the most forgotten feature of media servers is that you will be able to keep growing, taking advantage of the newest developments and innovations in streaming media at practically no extra cost. Media servers specialize in developing and implementing features that improve your media delivery, making sure you will always be caught up with the latest innovations.

Implementing these new features, protocols or outputs yourself is a full-time job. Streaming media developments happen so quickly that settling for a system that isn’t keeping up with developments will most likely render your entire media chain outdated or unusable within just a few years.

As an example: using flash players was considered a good, stable and proven solution just a few years ago. These days, the Flash plugin is blocked automatically by the majority of consumer browsers.

4. User-oriented, content-aware connection handling

Using a media server allows you to customize every user's experience individually. This means that you are able to change the experience of a single viewer without affecting the experience of others.

Web servers usually send the entire file as fast as the connection allows, meaning that even if your viewers do not watch the entire video they will still have received most of it. This means bandwidth is wasted, and bandwidth is often the highest cost you have when running a media service.

Media servers are able to send media in a real-time sense, which means viewers will have downloaded just a bit more than they have watched. This can easily save you hundreds of GBs of bandwidth. This also means that when your platform grows, the savings of using a media server grow proportionally.
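As a back-of-the-envelope example of that saving (the numbers below are made up purely for illustration):

// Illustration only: a viewer watches 10 minutes of a 60 minute, 2 Mbps video.
var bitrateMBps = 2 / 8; // 2 Mbps is 0.25 megabytes per second
var videoSeconds = 60 * 60;
var watchedSeconds = 10 * 60;

// A plain web server sends the file as fast as it can, so most of it arrives anyway.
var webServerMB = bitrateMBps * videoSeconds; // 900 MB for the whole file
// A media server sends in real time, so roughly only what was watched is delivered.
var mediaServerMB = bitrateMBps * watchedSeconds; // 150 MB

console.log("saved per viewer: " + (webServerMB - mediaServerMB) + " MB"); // 750 MB
// Across a thousand such viewers that is already roughly 750 GB saved.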

Besides saving bandwidth this user oriented connection handling also allows for several other tricks, which Erik will highlight in the next blog post.
16 Jan 2017

[Blog] Introducing our new bi-monthly blog posts


Hello streaming enthusiasts around the world,

Happy 2017! A new year is a time for new beginnings, and here at DDVTech we have decided to start the new year off with a bi-monthly blog. From now on, twice per month you will be able to read posts from the MistServer team right here, covering subjects we didn't cover before. We plan to switch authors every post, with each author covering a different type of subject matter. In the coming posts you can expect these authors:

  • Balder: Streaming how-to's and ideas
  • Erik: Innovation and various other subjects
  • Carina: Streaming on the web
  • Jaron (myself): Technical deep-dives and behind the scenes details on our team's progress

Later this month you can expect a post from Balder, and from then on we'll randomly pick an author for each post. We hope you're looking forward to reading our rambles!

— Jaron

10 Jan 2017

[Release] Stable release 2.9.0 now available!


Hello everyone! Stable release 2.9.0 of MistServer is now available! The full changelog is available here and downloads are here. Here is a short summary:



  • Added the VideoJS player to our smart meta-player. Adding VideoJS allows HLS playback to work in most non-Apple browsers in addition to Apple browsers.
  • The MistController can now restart without closing connections, and restarts faster.
  • Pro Feature: Added RECORDING_END trigger. This trigger alerts you when a file finishes writing to disk, allowing you to do your own post-processing, for example.
  • Pro Feature: MP4 live improvements. MP4 live is now compatible with Firefox, no longer randomly disconnects mid-stream, and the end-to-end latency has been reduced to 5-8 seconds for most streams.
  • Various other bugfixes and small improvements.
