News

18 Jul 2017

[Blog] Load balancing especially for media servers

Hello streaming media enthusiasts! Jaron here, with a new background article. In this blog post, I'll detail the why and how of our load balancing solution, which is currently being beta-tested in some of our clients' production deployments.

Quick primer on load balancing and how it relates to media

The concept of load balancing is not new; not by a long shot. It is probably one of the most researched and experimented-upon topics in computing. So, obviously, the load balancing problem has already been thoroughly solved. There are many different techniques and algorithms, both generic and specific ones.

Media is a very particular type of load, however: it benefits greatly from clustering. This means that if you have a media stream, it's beneficial to serve all users of that stream from the same server. Serving the same stream from multiple servers increases the load much more than serving multiple users from a single server. Of course that changes when there are so many users connected that a single server cannot handle the load anymore, and you are forced to spread those users over multiple servers. Even then, you will want to keep the number of servers per stream as low as possible, while spreading the users more or less equally over those servers.

Why is this important?

Most traditional load balancing methods spread requests randomly, evenly, or to whichever server is least loaded. This works, of course. However, it suffers greatly when there is a sudden surge of new viewers. These viewers will either all be sent to the same server, or be spread over all servers unnecessarily. The result is sub-optimal use of the available servers... and that means higher costs for a worse experience for your end-users.

Load balancing, MistServer-style

MistServer's load balancing technique is media-aware. The load balancer maintains constant contact with all servers, receiving information on bandwidth use, CPU use, memory use, active streams, viewer counts per stream and bit rates per stream. It uses these numbers to preemptively cluster incoming requests on servers, while predicting what the bandwidth use will be once these users have connected. This last trick in particular allows MistServer to handle surges of new users correctly without overloading any single server. Meanwhile, it constantly adjusts its predictions with new data received from the servers, accounting for dropped users and changes in bandwidth patterns over time.
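To make the clustering idea concrete, here is a minimal, hypothetical sketch in PHP (the language used for the other examples on this site). It is not MistServer's actual algorithm, and the server names and load figures are made up; it only illustrates the principle of preferring servers that already carry the requested stream, and only then picking the least loaded one.

<?PHP
  //A hypothetical sketch of media-aware server selection; NOT MistServer's
  //actual load balancing algorithm. $servers maps hostnames to their state:
  //the streams they already serve and a combined load figure between 0 and 1.
  function pick_server($servers, $stream) {
    $best = null;
    $best_score = null;
    foreach ($servers as $host => $info) {
      $score = $info["load"];
      //starting a stream on a server that does not have it yet is far more
      //expensive, so servers already carrying the stream get a large head start
      if (!in_array($stream, $info["streams"])) { $score += 1.0; }
      if (($best_score === null) || ($score < $best_score)) {
        $best_score = $score;
        $best = $host;
      }
    }
    return $best;
  }

  //example with two made-up servers: the stream stays clustered on edge1,
  //even though edge2 is less loaded
  $servers = Array(
    "edge1.example.com" => Array("streams" => Array("livefeed"), "load" => 0.6),
    "edge2.example.com" => Array("streams" => Array(), "load" => 0.2)
  );
  echo pick_server($servers, "livefeed"); //prints edge1.example.com
?>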

A little extra

Our load balancer also doubles as an information source for the servers: they can query it for the best server to pull a feed from, for the lowest latency and highest efficiency. As an extra trick, the load balancer (just like MistServer itself) is fully capable of making any changes to its configuration without needing to be restarted, allowing you to add and remove servers from the list dynamically without affecting your users. Last but not least, the load balancer also provides merged statistics on viewer counts and server health for use in your own monitoring and usage tracking systems.

Want to try it?

As mentioned in the introduction, our load balancer is still in beta. It's fully stable and usable; we just want to collect a little bit more data to further improve its behavior before we release it to everyone. If you are interested in helping us make these final improvements and/or in testing cool new technology, contact us and let us know!

Wrapping up

That was it for this post. Next time Balder will be back with a new subject!

3 Jul 2017

[Blog] Building an access control system with triggers and PHP

Hello everyone, Carina here. In this blog I will discuss how PHP can be used to inform MistServer that a certain user has access to streams through the use of triggers. The Pro version is required for this.

The process

Before I go into the implementation details, let's take a look at what the user validation process is going to be like.

  1. The process starts when a user opens a PHP page, which we shall call the video page
  2. The website knows who this user is through a login system, and uses their userid and their ip to generate a hash, which is added to video urls on the video page
  3. The video is requested from MistServer, which fires the USER_NEW trigger
  4. The trigger requests another PHP page, the validation page, which checks the hash and returns whether or not this user is allowed to watch streams
  5. If the user has permission to watch, MistServer sends the video data

Alright, on to implementation.

The video page

Please see the sample code below.

First, the user id should be retrieved from an existing login system. In the sample code, it is set to 0, or the value provided through the user parameter. Then, the user id and ip are hashed, so that the validation script can check that the user id hasn't been tampered with. Both the user id and the hash will be appended to the video url, so that MistServer can pass them along to the validation script later.
Next, the video is embedded into the page.

<?PHP

  //some general configuration
  //where MistServer's HTTP output can be found
  $mist_http_host = "http://<your website>:8080";
  //the name of the stream that should be shown
  $stream = "<stream name>";

  //set the userid
  //in a 'real' situation, this would be provided through a login system
  if (isset($_REQUEST["user"])) { $user_id = intval($_REQUEST["user"]); }
  else { $user_id = 0; }

  //create a hash containing the remote IP and the user id, that the user id can be validated against
  $hash = md5($_SERVER["REMOTE_ADDR"].$user_id."something very very secret");

  //prepare a string that can pass the user id and validation hash to the trigger
  $urlappend = "?user=".$user_id."&hash=".$hash;

  //print the embed code with $urlappend where appropriate
?>
<div class="mistvideo" id="tmp_H0FVLdwHeg4j">
  <noscript>
    <video controls autoplay loop>
      <source
        src="<?PHP echo $mist_http_host."/hls/".$stream."/index.m3u8".$urlappend; ?>"
        type="application/vnd.apple.mpegurl">
      </source>
      <source
        src="<?PHP echo $mist_http_host."/".$stream.".mp4".$urlappend; ?>"
        type="video/mp4">
      </source>
    </video>
  </noscript>
  <script>
    var a = function(){
      mistPlay("<?PHP echo $stream; ?>",{
        target: document.getElementById("tmp_H0FVLdwHeg4j"),
        loop: true,
        urlappend: "<?PHP echo $urlappend; ?>"
      });
    };
    if (!window.mistplayers) {
      var p = document.createElement("script");
      p.src = "<?PHP echo $mist_http_host; ?>/player.js";
      document.head.appendChild(p);
      p.onload = a;
    }
    else { a(); }
  </script>
</div>

Trigger configuration

Sample configuration of the USER_NEW trigger

In the MistServer Management Interface, a new trigger must be added, using the following settings:

  • Trigger on: USER_NEW
  • Applies to: Check the streams for which user validation will be required, or don't check any streams to require validation for all streams
  • Handler: The url to your validation script
  • Blocking: Check.
    This means that MistServer will use the script's output to decide whether or not to send data to the user.
  • Default response: 1 or 0.
    If for some reason the script can't be executed, this value will be used by MistServer. When this is set to 1, everyone will be able to watch the stream in case of a script error. When set to 0, no one will.

The validation page

Please see the sample code below.
It starts off by defining what the script should return in case something unexpected happens. Here I've chosen 0 (don't play the video), since unexpected input most likely means someone is trying to tamper with the system.
Next, some functions are defined that will make the rest of the script more readable.

The first actual action is to check whether the script was called by one of the supported triggers. The trigger type is sent by MistServer in the X-Trigger request header, which PHP exposes as $_SERVER["HTTP_X_TRIGGER"].

Next, the payload that MistServer has sent in the request body is retrieved. The payload contains variables like the stream name, ip, protocol, etc. (a hypothetical example is sketched right below). These will be used shortly, starting with the protocol.
When the protocol is HTTP or HTTPS, the user is probably requesting the JavaScript or CSS files that the meta player needs. There is no reason to deny anyone access to these, and thus the script informs MistServer that it is allowed.
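For illustration only, the snippet below hard-codes a hypothetical USER_NEW payload; in reality it arrives in the POST body of the trigger request, and all values shown here (stream name, ip, and so on) are made up. It shows how the newline-separated body maps onto the array returned by the get_payload function in the validation script below.

<?PHP
  //hypothetical USER_NEW payload: one value per line, in this fixed order
  $example_body = "examplestream\n".                                 //stream
                  "192.0.2.15\n".                                    //ip
                  "1234\n".                                          //connection_id
                  "HLS\n".                                           //protocol
                  "/hls/examplestream/index.m3u8?user=3&hash=abc\n". //request_url
                  "f00b4r";                                          //session_id

  //the same keys that get_payload() in the validation script uses
  $types = Array("stream","ip","connection_id","protocol","request_url","session_id");
  $lines = explode("\n",$example_body);
  $payload = array_combine(array_slice($types,0,count($lines)),$lines);

  //$payload["stream"] is now "examplestream", $payload["protocol"] is "HLS",
  //and $payload["request_url"] carries the user and hash parameters
?>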

If the protocol is something else, the user id and hash are retrieved from the request url, which is passed to the validation script in the payload.
The hash is compared to what the hash should be, which is recalculated with the provided payload variables. If the user id has been tampered with or if the request is from another ip, this check should fail and MistServer is told not to send video data.

Otherwise, the now validated user id can be used to evaluate if this user is allowed to watch streams.

<?PHP
  //what the trigger should return in case of errors
  //(kept as a string, because die() treats an integer argument as an exit status and would not print it)
  $defaultresponse = "0";

  ///\function get_payload
  /// the trigger request contains a payload in the post body, with various information separated by newlines
  /// this function returns the payload variables in an array
  function get_payload() {

    //translation array for the USER_NEW (and CONN_OPEN, CONN_CLOSE, CONN_PLAY) trigger
    $types = Array("stream","ip","connection_id","protocol","request_url","session_id");

    //retrieve the post body
    $post_body = file_get_contents("php://input");
    //convert to an array
    $post_body = explode("\n",$post_body);

    //combine the keys and values, and return
    return array_combine(array_slice($types,0,count($post_body)),$post_body);
  }

  ///\function no_ffffs
  /// removes ::ffff: from the beginning of an IPv6-mapped address that is actually an IPv4 address, so that it gives the same result as $_SERVER["REMOTE_ADDR"] on the video page
  function no_ffffs($str) {
    if (substr($str,0,7) == "::ffff:") {
      $str = substr($str,7);
    }
    return $str;
  }

  ///\function user_can_watch
  /// check whether a user can watch streams
  ///\TODO replace this with something sensible ;)
  function user_can_watch ($userid) {
    $can_watch = Array(1,2,3,6);
    if (in_array($userid,$can_watch)) { return 1; }
    return 0;
  }


  //as we're counting on the payload to contain certain values, this code doesn't work with other triggers (and it wouldn't make sense either)
  if (!in_array($_SERVER["HTTP_X_TRIGGER"],Array("USER_NEW","CONN_OPEN","CONN_CLOSE","CONN_PLAY"))) {
    error_log("This script is not compatible with triggers other than USER_NEW, CONN_OPEN, CONN_CLOSE and CONN_PLAY");
    die($defaultresponse);
  }

  $payload = get_payload();

  //always allow HTTP(S) requests
  if (($payload["protocol"] == "HTTP") || ($payload["protocol"] == "HTTPS")) { echo 1; }
  else {

    //error handling
    if (!isset($payload["request_url"])) {
      error_log("Payload did not include request_url.");
      die($defaultresponse);
    }

    //retrieve the request parameters (the part of the request url after the "?")
    $request_url_params = explode("?",$payload["request_url"]);
    parse_str(isset($request_url_params[1]) ? $request_url_params[1] : "",$request_url_params);

    //more error handling
    if (!isset($payload["ip"])) {
      error_log("Payload did not include ip.");
      die($defaultresponse);
    }
    if (!isset($request_url_params["hash"])) {
      error_log("Request_url parameters did not include hash.");
      die($defaultresponse);
    }
    if (!isset($request_url_params["user"])) {
      error_log("Request_url parameters did not include user.");
      die($defaultresponse);
    }

    //validate the hash/ip/userid combo (strict comparison avoids PHP type juggling surprises)
    if ($request_url_params["hash"] !== md5(no_ffffs($payload["ip"]).$request_url_params["user"]."something very very secret")) {
      echo 0;
    }
    else {

      //the userid is valid, let's check if this user is allowed to watch the stream
      echo user_can_watch($request_url_params["user"]);

    }
  }

That's all folks: the validation process has been implemented.

Further information

The example described above is written to allow a user access to any stream that is served through MistServer. If access should be limited to certain streams, there are two ways to achieve this.
The simplest option is to configure the USER_NEW trigger in MistServer to only apply to the restricted streams. With this method it is not possible to differentiate between users (for example, UserA may watch stream1 and stream3, while UserB may only watch stream2).
If differentiation is required, the user_can_watch function in the validation script should be modified to take into account the stream that is being requested (which is included in the trigger payload), as sketched below.
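For instance, a minimal sketch of such a modified function could look like the code below. The user ids and stream names are made up; the call in the validation script would then pass the stream name from the payload along as a second argument.

<?PHP
  ///\function user_can_watch
  /// hypothetical per-stream variant: checks whether a user may watch a specific stream
  ///\TODO replace the hard-coded permissions with something sensible ;)
  function user_can_watch ($userid, $stream) {
    //made-up permissions: user 1 may watch stream1 and stream3, user 2 only stream2
    $permissions = Array(
      1 => Array("stream1","stream3"),
      2 => Array("stream2")
    );
    if (!isset($permissions[$userid])) { return 0; }
    return in_array($stream, $permissions[$userid]) ? 1 : 0;
  }

  //in the validation script, the call would then become:
  //echo user_can_watch($request_url_params["user"], $payload["stream"]);
?>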

The USER_NEW trigger is only fired when a user is not yet cached by MistServer. If this is not desired, for example when the validation script also serves a statistical purpose, the CONN_PLAY trigger can be used instead. It should be noted, however, that for segmented protocols such as HLS, CONN_PLAY will usually fire for every segment.

More information about the trigger system can be found in chapter 4.4 of the manual.
If any further questions remain, feel free to contact us.

Next time, Jaron will be discussing our load balancer.

26 Jun 2017

[Blog] AV1

Hello, this is Erik with a post about the up-and-coming AV1 video codec. While none of the elements that will eventually be accepted into the final design are currently fixed (and the specification of the codec might not even be finalized this year), more and more companies have already started working on support. This post is meant to give a more in-depth view of what is currently happening, and how everything will most likely eventually fall into place.

NetVC

Internet Video Codec (NetVC) is a working group within the Internet Engineering Task Force (IETF). The goal of the group is to develop and monitor the requirements for the standardization of a new royalty-free video codec.

The current version of their draft specifies a variety of requirements for the next-generation royalty-free codec. These range from low-latency real-time collaboration to 4k IPTV, and from adaptive bit rate to the existence of a real-time software encoder.

While these requirements are not bound to a specific codec, they determine what is needed for a codec to be deemed acceptable. Techniques from the major contenders were taken into account when this list was drawn up in 2015, and the Alliance for Open Media (AOMedia) formed shortly after to develop the AV1 codec to comply with these requirements, in a joint effort between the Daala, Thor and VP10 teams.

AV1

AOMedia was formed to streamline the development of a new open codec, as multiple groups were simultaneously working on different new codecs. AV1 is the first codec to be released by the alliance. With many of the influential groups in the internet video space participating in its development, it is set up to compete with HEVC in compression and visual quality while remaining completely royalty-free.

Because steering completely clear of the use of patents when designing a codec is a daunting task, the AOMedia group has decided to provide a license to use their IP in any project using AV1, as long as the user does not sue for patent infringement. This, in combination with a thorough Intellectual Property Rights review, means that AV1 will be free of royalties. This should give it a big edge over its main competitor HEVC, for which there are multiple patent pools one needs to pay license fees to, with an unknown number of patents not contained in these pools.

Google's VP10 codec has been chosen as the base for AV1. In addition, AV1 will contain techniques from both Daala (by Xiph and Mozilla) and Thor (by Cisco).

The AV1 codec is being developed to be used in combination with the Opus audio codec, wrapped in WebM for HTML5 media and in WebRTC for real-time uses. As most browsers already support both the WebM format and the Opus codec, this immediately generates a large potential user base.

Experiments

Development of AV1 builds mainly on the efforts behind Google's VP10. Built upon VP9, VP10 was geared towards better compression while optimizing visual quality in the encoded bitstream. With the formation of AOMedia, Google decided to drop VP10 development and instead focus solely on the development of AV1.

Building upon a VP9 core, contributors can submit so-called experiments to the repository, which can then be enabled and tested by the community. If an experiment is considered worthwhile, it enters IPR review; after a successful pass there, it is enabled by default and added to the specification. Most experiments have come from the respective developments of VP10, Daala and Thor, but other ideas are welcome as well. As of yet, no experiments have been finalized.

Performance

Multiple tests have been run to compare AV1 to both H.264 and HEVC, with varying results over time. However, with no final selection of experiments yet, these performance measurements have been made not only with different settings, but also with completely different sets of experiments enabled.

While it is good to see how the codec is developing over time, a real comparison between AV1 and any other codec cannot be reliably made until the bitstream is frozen.

What's next?

With the end of the year targeted for finalizing the bitstream, the number of people interested in the codec will probably only grow. Given the variety of companies that make up AOMedia, the general assumption is that adoption of the codec and hardware support for encoding/decoding will arrive within months of the bitstream being finalized, rather than years.

While VP10 has been completely replaced by the AV1 codec, the same does not seem to hold for Thor and Daala. Both projects are still being developed in their own right, and that development does not seem limited to the features that will eventually be incorporated into AV1.

And that concludes it for now. Our next post will be by Carina, who will show how to create an access control system in PHP with our triggers.

1 Jun 2017

[Blog] Fantastic protocols and where to stream them

Hello everyone, Balder here. As mentioned by Carina I will talk about streaming protocols and their usage in streaming media. As the list can get quite extensive I will only focus on protocols in common use that handle both video and audio data.

Types of protocols

To keep it simple, a protocol is the method used to send and receive media data. Protocols can be divided into two types:

  • Stream based protocols
  • File based protocols

Stream based protocol

Stream based protocols are true streaming protocols. They use two-way communication between the server and the client and maintain an open connection. This allows for faster, more efficient delivery and lower latency. These properties make them the best option, if available. On the downside, you need a player that supports the protocol.

File based protocol

File based protocols use media containers to move data from one point to another. The two main methods within file based protocols are progressive delivery and segmented delivery. Progressive delivery has shorter start times and lower latencies, while segmented delivery has longer start times and higher latencies.

Progressive delivery

With progressive delivery a single file is either present or simulated, and transmitted to the viewer in one large chunk. The advantage of this is that, when downloaded, it becomes a regular file; the disadvantage is that trick play (seeking, reverse playback, fast forward, etc.) is only possible if the player supports it for the container format being used.

Segmented delivery

With segmented delivery the stream is split into multiple smaller files, which are transmitted to the viewer one chunk at a time. The advantage of this is that trick play is possible even without direct player support for the container, and it is better suited to streams of indefinite duration, such as live streams.

Short summary per protocol

Streaming protocols

Flash: RTMP

Real-Time Messaging Protocol, also known as RTMP, is used to deliver Flash video to viewers. It is one of the true streaming protocols. Since Flash is on the decline, it is now used less as a delivery method to viewers and more as a streaming ingest for streaming platforms.

RTSP

Real Time Streaming Protocol, also known as RTSP. The oldest streaming protocol; in its prime it was the most used protocol for media delivery. Nowadays it sees most of its use in IoT devices such as cameras, and it remains useful for LAN broadcasts. Internally it uses RTP for delivery. Because of its excellent latency and compatibility with practically all codecs, it is still a popular choice.

WebRTC

Web Real-Time Communications, also known as WebRTC. This is a set of browser APIs that makes browsers compatible with secure RTP (SRTP). As such it has all the properties of RTSP, with the added bonus of browser compatibility. Because it is still relatively new it hasn't seen much use yet, but there is a good chance that it will become the most prevalent protocol in the near future.

Progressive file delivery

Flash: Progressive FLV

Flash Video format, also known as FLV. It used to be the most common delivery format for web-based streaming, back when Flash was the only way to get video playing in a browser. With the standardization of HTML5 and the decline of Flash installations in browsers, it is seeing less and less use.

Progressive MP4

MPEG-4, more commonly known as MP4. Most modern devices and browsers support MP4 files, which makes it an excellent choice as a protocol. The particularities of the format give it relatively high overhead and make it complicated to use for live streams, but the excellent compatibility still makes it a popular choice.

MPEG-Transport Stream (MPEG-TS)

Also known as MPEG-TS. It is the standard used for Digital Video Broadcast (DVB), the old-and-proven TV standard. Because it was made for broadcast it is extremely robust and can handle even high levels of packet loss. The downside is almost 20% overhead.

Ogg

Ogg was one of the first open source, patent-unencumbered container formats. Due to its open nature and free usage it has seen some popularity, but mostly as an addition to existing solutions, because its compatibility is not wide enough to warrant using it as the only delivery method.

Matroska

Also known as MKV. It is a later open source container format that enjoys widespread adoption, mainly because of its excellent compatibility with almost all codecs and subtitle formats in existence. However, precisely because it allows so many codecs, it is very unpredictable whether a given file will play in-browser.

WebM

WebM is a subset of Matroska, simplified for use on the web. Made to solve the browser compatibility issues of Matroska, it plays in almost all modern browsers. The downside is that the set of allowed codecs is very restricted.

Segmented file delivery

HTTP Smooth Streaming (HSS)

Also known as HSS. Created by Microsoft as an adaptive streaming solution for web browsers. The main downside is that it has no support outside of Windows systems, and it has even been dropped in Microsoft's latest browser, Edge, in favor of HLS.

HTTP Live Streaming (HLS)

Also known as HLS. This uses segmented TS files internally and is the streaming protocol developed for iOS devices. Due to the large number of iOS devices, it sees widespread use. The biggest downsides are the high overhead and high latency.

Flash: HTTP Dynamic Streaming (HDS)

HTTP Dynamic Streaming, also known as HDS. This uses segmented F4S (FLV-based) files internally and is the last Flash protocol. It was created as a response to HLS, but never saw widespread use.

Dynamic Adaptive Streaming over HTTP (MPEG-DASH)

More commonly known as MPEG-DASH. It was meant to unify the splintered segmented-streaming ecosystem under DASH; instead, it standardized all of the existing protocols under a single name. The current focus is on reducing complexity and latency.

Which protocol should you pick?

If available, you should always pick a stream based protocol, as these will give you the best results. Sadly, most devices and browsers do not support most stream based protocols, and every device has its own preferred delivery format, which complicates matters.

Some protocols look like they’re supported by every device, so why not just stick to those? Well, every protocol also comes with its own advantages and disadvantages. The table below attempts to clear up the differences:

Protocol | Type             | Platforms                              | Trick play | Latency   | Overhead  | Video codecs*        | Audio codecs*   | Status
RTMP     | Streaming        | Flash                                  | Yes        | Low       | Low       | H264                 | AAC, MP3        | Legacy
RTSP     | Streaming        | Android, native players                | Yes        | Low       | Low       | Practically all      | Practically all | Legacy
WebRTC   | Streaming        | Browsers                               | Yes        | Low       | Low       | H264, VP8, VP9       | Opus, AAC, MP3  | Active development
FLV      | Progressive file | Flash                                  | No         | Low       | Medium    | H264                 | AAC, MP3        | Legacy
MP4      | Progressive file | Browsers, native players               | No         | Low       | High      | H264, HEVC, VP8, VP9 | AAC, MP3        | Maintained
MPEG-TS  | Progressive file | TV, native players                     | No         | Low       | Very high | H264, HEVC           | AAC, MP3, Opus  | Maintained
Ogg      | Progressive file | Browsers, native players               | No         | Low       | Medium    | Theora               | Opus, Vorbis    | Maintained
Matroska | Progressive file | Native players                         | No         | Low       | Low       | Practically all      | Practically all | Maintained
WebM     | Progressive file | Browsers, native players               | No         | Low       | Low       | VP8, VP9             | Opus, Vorbis    | Active development
HSS      | Segmented file   | Scripted players, Silverlight          | Yes        | High      | High      | H264, HEVC           | AAC, MP3        | Legacy
HLS      | Segmented file   | iOS, Safari, Android, scripted players | Yes        | Very high | Very high | H264, HEVC           | AAC, MP3        | Active development
HDS      | Segmented file   | Flash                                  | Yes        | High      | Medium    | H264                 | AAC, MP3        | Legacy
DASH     | Segmented file   | Scripted players                       | Yes        | High      | Varies    | Varies               | Varies          | Active development

*Only codecs still in common use today are listed.

In the end you can pick your preferred protocol based on the features and support, but to truly reach everyone you will most likely have to settle for a combination of protocols.

Relating all of this to MistServer, you can see that we are still missing some of the protocols mentioned above; we will be adding support for those in the near future. For the next blog post, Erik will write about AV1, the upcoming (open) video codec.

17 May 2017

[Release] Stable release 2.11 now available!

Hello everyone! Stable release 2.11 of MistServer is now available! The full change log is available here and downloads are here. Here are some highlights:

  • Pro feature: Access log added. Information about viewers connecting to MistServer is now logged.
  • Pro feature: New UDP-based API added
  • Pro feature: Session tagging. You can now freely add tags to sessions and execute actions on sessions with a specific tag.
  • Pro feature: HLS file and URL (pull) input. You can now use .m3u8 file structures or HTTP urls as input for MistServer.
  • Pro feature: .wav output support added
  • Pro feature: PCM A-law codec support for RTSP and WAV
  • Pro feature: Opus audio codec support for RTSP
  • Feature: Opus audio codec support for Ogg
  • Feature: Password no longer required when logging into the interface using localhost.
  • Pro Improvement: Prometheus settings can now be changed during runtime
  • Pro Improvement: Updater no longer blocks API access while running, updates can now be performed as rolling update without disconnecting users
  • Pro Improvement: RTMP push output now compatible with Facebook and YouTube
  • Improvement: Console output is now colour-coded
  • Improvement: Local API access no longer requires authorization
  • Improvement: Overhaul of all analysers; their usage is now standardized.
  • Improvement: API changed to always return minimized-style output
  • Improvement: Backported many previously Pro-only API calls to the open-source edition; see the manual for details
  • Bugfix: ".html" access to streams now works correctly when used behind a proxy