News

17 May 2017

[Blog] Connecting to our API with PHP

Hello there! Carina here.

I reckon that those of you who are using MistServer have, perhaps without realising it, frequently interacted with its API: the MistServer Management Interface (or MI for short) is a JavaScript-based web page that uses the API to communicate with Mist. Through the API, it requests information from MistServer and saves any configuration edits.

For most users, the MI is all they'll need to configure MistServer to suit their needs. For some, however, it's not suitable. For example: suppose you want a group of users to be able to configure only the streams that belong to them. MistServer can facilitate multiple users, but it doesn't directly support a permission system of that sort. Instead, PHP can be used to translate user input into API calls.

In this blog post, I'll explain what such a PHP implementation might look like. All the PHP code used for this example will be provided. I won't get too technical in the blog post itself, because I reckon that those who want all the details can read the PHP code, and those who don't, needn't be bothered with the exact reasons for everything. Additional information about how to use MistServer's API can be found in the documentation.

Resources

Communicating with MistServer

The first step is to set up a way to send HTTP POST requests with a JSON payload to MistServer. I'll call this file mistserver.php. I've used cURL to handle the HTTP request. MistServer will respond with a JSON-encoded reply, which is decoded into an associative array.
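To make that concrete, here's a minimal sketch of what the core of mistserver.php might look like. It assumes MistServer's default API endpoint (http://localhost:4242/api) and the "command" POST field described in the API documentation; buildPayload() is a helper name of my own, split out so the encoding step is easy to test on its own.

```php
<?php
// Minimal sketch of mistserver.php: send an API command to MistServer as
// an HTTP POST with a JSON payload, and decode the JSON reply.
// Assumption: the default API endpoint http://localhost:4242/api, which
// takes the JSON request in a POST field named "command".

function buildPayload($command) {
  return "command=".urlencode(json_encode($command));
}

function sendCommand($command, $host = "localhost", $port = 4242) {
  $curl = curl_init("http://".$host.":".$port."/api");
  curl_setopt($curl, CURLOPT_POST, true);
  curl_setopt($curl, CURLOPT_POSTFIELDS, buildPayload($command));
  curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
  $reply = curl_exec($curl);
  curl_close($curl);
  if ($reply === false) { return false; }
  // MistServer replies with JSON; decode it into an associative array.
  return json_decode($reply, true);
}
```

A call like sendCommand(array("config" => true)) would then return MistServer's configuration as an associative array, or false if the request failed.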

If MistServer is hosted on the same machine that will be running the PHP, authorization isn't required (new in 2.11). Otherwise, MistServer will reply telling us to authenticate using a special challenge string. In that case, the password is hashed together with the challenge string and sent back to MistServer along with the username. Once MistServer reports that the status is OK, the authentication part is stripped from MistServer's reply, and the rest is returned.
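The challenge step can be sketched like this. The md5(md5(password) . challenge) construction follows MistServer's API documentation, but double-check it against the docs for your version; the function names are mine.

```php
<?php
// Sketch of the challenge-response step. Per MistServer's API docs, the
// response to a challenge is md5(md5($password) . $challenge), where
// $challenge is the string MistServer sent in its "authorize" reply.

function authResponse($password, $challenge) {
  return md5(md5($password).$challenge);
}

// Build the authorize object to merge into the next request.
function authObject($username, $password, $challenge) {
  return array("authorize" => array(
    "username" => $username,
    "password" => authResponse($password, $challenge)
  ));
}
```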

By default, we'll be using minimal mode. This saves some bandwidth, but mainly it makes MistServer's response less bulky and thus more readable.

Actually requesting things!

I've purposely left any error output out of mistserver.php, so that you can copy the file straight into your own projects. We should still tell users when something goes wrong, though! I've put a few usage examples that do output something readable in a separate file, index.php.

MistServer never replies with an error key directly in its main object, so I've used that key to report cURL or API errors.

I've created a new function, getData(), that adds error printing around the main communication function. It returns false if there was an error, and the data array otherwise.
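A sketch of what getData() could look like. The real sendCommand() lives in mistserver.php; here it is stubbed out (always pretending MistServer replied with an error) so the example is self-contained and the error path is visible.

```php
<?php
// Stub standing in for the real communication function in mistserver.php.
// It always pretends MistServer replied with an error, so the error
// handling below can be seen in action.
function sendCommand($request) {
  return array("error" => "example failure");
}

// getData(): wrap the communication function with error printing.
// MistServer never uses an "error" key in its main reply object, so that
// key can safely carry cURL or API errors. Returns false on error, the
// data array otherwise.
function getData($request) {
  $data = sendCommand($request);
  if ($data === false) {
    echo "Error: no reply from MistServer\n";
    return false;
  }
  if (isset($data["error"])) {
    echo "Error: ".$data["error"]."\n";
    return false;
  }
  return $data;
}
```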

Reading

We're all set! Let's actually start requesting some information. How about we check which protocols are enabled?

getCurrentProtocols() calls getData() and asks MistServer to respond with the config object. We check if the communication was successful, and if the config key exists. Then, we loop over the configured protocols and print the connector name. That's it!

Sent:

Array(
  "config" => true
)

Received:

Array(
  "config" => Array(
    "protocols" => Array(
      0 => Array(
        "connector" => "HTTP",
        "online" => 1
      ),
      [..]
    ),
    [..]
  )
)
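The parsing step of getCurrentProtocols() could be sketched like this, using the reply structure shown above. listProtocols() is an illustrative name; the real reply array would come from getData().

```php
<?php
// List the configured connector names from a reply to {"config": true}.
function listProtocols($reply) {
  $names = array();
  if (($reply !== false) && isset($reply["config"]["protocols"])) {
    foreach ($reply["config"]["protocols"] as $protocol) {
      $names[] = $protocol["connector"];
    }
  }
  return $names;
}

// Sample reply, mimicking the structure shown above.
$reply = array("config" => array("protocols" => array(
  array("connector" => "HTTP", "online" => 1),
  array("connector" => "RTMP", "online" => 1)
)));
print implode(", ", listProtocols($reply))."\n"; // HTTP, RTMP
```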

Another example. Let's read the logs and, if there are any errors in them, print the most recent one. Note that MistServer only provides the last 100 log entries through the API, so there might not be any.

getLastErrorLog() also calls getData(), but this time requests the log array. If we get it back, we reverse the array to get the newest entries first, and then start looping over them, until we find an error. If we do, we print it.

Sent:

Array(
  "log" => true
)

Received:

Array(
  "log" => Array(
    0 => Array(
      0 => 1494493673,
      1 => "CONF",
      2 => "Controller started"
    ),
    [..]
  )
)
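The search inside getLastErrorLog() might look like the sketch below. Each log entry is an array of timestamp, level and message; that error entries carry the FAIL level is an assumption here, so check your own logs for the level names MistServer actually uses.

```php
<?php
// Return the newest error from MistServer's log array, or false if the
// last entries contain no errors. Each entry is [timestamp, level,
// message]; the "FAIL" level marking errors is an assumption.
function lastError($log) {
  foreach (array_reverse($log) as $entry) {
    if ($entry[1] == "FAIL") {
      return date("Y-m-d H:i:s", $entry[0]).": ".$entry[2];
    }
  }
  return false;
}

// Sample log, mimicking the structure shown above.
$log = array(
  array(1494493673, "CONF", "Controller started"),
  array(1494493680, "FAIL", "Stream example: could not open source")
);
```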

As you can see, using MistServer's API is actually quite straightforward. It's time to raise the bar (slightly, don't worry) and try changing some of MistServer's configuration.

Writing

How about adding a stream? For this we'll use the addstream command. This command is available in the Open Source version from 2.11; it was already available in Pro versions.

Two parameters need to be included: the stream name and its source. More options are possible, depending on the kind of source. Note that if addstream is used and a stream with that name already exists, it will be overwritten.

The addStream($name,$options) function calls getData() with these values, checks if the stream exists in the reply, and returns it.

Sent:

Array(
  "addstream" => Array(
    "example" => Array(
      "source" => "/path/to/file.flv"
    )
  )
)

Received:

Array(
  "streams" => Array(
    "example" => Array(
      "name" => "example",
      "source" => "/path/to/file.flv"
    ),
    "incomplete list" => 1
  )
)
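Sketched out, addStream($name, $options) boils down to building the request shown above and checking the reply; the helper names here are mine.

```php
<?php
// Build the addstream request for a single stream.
function buildAddStream($name, $options) {
  return array("addstream" => array($name => $options));
}

// Check whether the reply confirms the stream now exists.
function streamWasAdded($reply, $name) {
  return ($reply !== false) && isset($reply["streams"][$name]);
}

$request = buildAddStream("example", array("source" => "/path/to/file.flv"));
// The real code would pass $request to getData() and feed the reply to
// streamWasAdded(); this sample reply mimics the one shown above.
$reply = array("streams" => array(
  "example" => array("name" => "example", "source" => "/path/to/file.flv"),
  "incomplete list" => 1
));
```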

Alright, great. Now, let's remove the stream again with the deletestream command. This command is available in the Open Source version from 2.11 as well.

The deleteStream($name) function calls getData(), and checks if the stream has indeed been removed.

Sent:

Array(
  "deletestream" => Array(
    0 => "example"
  )
)

Received:

Array(
  "streams" => Array(
    "incomplete list" => 1
  )
)
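And the mirror image for deleteStream($name), again with illustrative helper names of my own:

```php
<?php
// Build the deletestream request; the stream name goes into a plain array.
function buildDeleteStream($name) {
  return array("deletestream" => array($name));
}

// After deletion, the stream should be absent from the streams object.
function streamWasDeleted($reply, $name) {
  return ($reply !== false) && !isset($reply["streams"][$name]);
}

// Sample reply, mimicking the one shown above.
$reply = array("streams" => array("incomplete list" => 1));
```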

And there you have it. The API truly isn't that complicated. So get on with it and integrate MistServer into your project!

Next time, you can look forward to Balder, who will be talking about the different streaming protocols.

17 May 2017

[Release] Stable release 2.11 now available!

Hello everyone! Stable release 2.11 of MistServer is now available! The full change log is available here and downloads are here. Here are some highlights:

  • Pro feature: Access log added. Information about viewers connecting to MistServer is now logged.
  • Pro feature: New UDP-based API added
  • Pro feature: Session tagging. You can now freely add tags to sessions and execute actions on sessions with a specific tag.
  • Pro feature: HLS file and URL (pull) input. You can now use .m3u8 file structures or HTTP URLs as input for MistServer.
  • Pro feature: WAV output support added
  • Pro feature: PCM A-law codec support for RTSP and WAV
  • Pro feature: Opus audio codec support for RTSP
  • Feature: Opus audio codec support for Ogg
  • Feature: Password no longer required when logging into the interface from localhost
  • Pro improvement: Prometheus settings can now be changed during runtime
  • Pro improvement: The updater no longer blocks API access while running; updates can now be performed as a rolling update without disconnecting users
  • Pro improvement: RTMP push output is now compatible with Facebook and YouTube
  • Improvement: Console output is now colour-coded
  • Improvement: Local API access no longer requires authorization
  • Improvement: Overhauled all analysers; their usage is now standardized
  • Improvement: API changed to always return minimized-style output
  • Improvement: Backported many previously Pro-only API calls to the open source edition; see the manual for details
  • Bugfix: ".html" access to streams now works correctly when used behind a proxy
1 May 2017

[Blog] Deep-dive: the triggers system

Hello streaming media enthusiasts! We're back from the NAB show and have slept off our jetlag. But that's enough about that - a post about our triggers system was promised, and here it is.

Why triggers?

We wanted to add a method to influence the behavior of MistServer on a low level, without being overly complicated. To do so, we came up with the triggers system. This system is meant to allow you to intercept certain events happening inside the server, and change the outcome depending on some decision logic. The "thing" with the decision logic is called the trigger handler, as it effectively "handles" the trigger event. Further requirements for the triggers system were no ties to any specific programming language, and the ability to accept both remote and local handlers. After all, at MistServer we are all about openness and making sure integrations are as friction-less as possible.

The types of events you can intercept were purposefully made very wide: all the way from high-level events such as server boot and server shutdown to low-level events such as connections being opened or closed, and everything in between. Particularly popular is the USER_NEW trigger, which allows you to accept or deny views on a per-session level, and as such is very suitable for access control systems.

How?

After a lot of deliberation and tests, we finally settled on a standard input / output system as the lowest common denominator that all scripting and programming languages would support. As input, a trigger handler will receive a newline-separated list of parameters that contain information about the event being triggered. As output, the system expects either a simple boolean true/false or a replacement value to be used (where applicable, e.g. when rewriting URLs).

Two implementations were made: a simple executable style, and an HTTP style. In the executable style, the trigger type is sent as the only argument to an executable, which is run; the trigger input is piped into its standard input, and the result is read from its standard output. In the HTTP style, the trigger type is sent as an HTTP header, the URL is requested, and the trigger input is sent as the POST body while the result is read from the response body. These implementations allowed us to keep nearly identical internal handling mechanics, ensuring the behavior would be consistent regardless of the trigger handler type used.
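As an illustration, an executable-style handler for the USER_NEW trigger could look like the PHP sketch below. The exact parameter layout per trigger type is in the MistServer documentation; the assumption here is that the first parameter is the stream name, and "blocked_stream" is a made-up name.

```php
<?php
// Hypothetical executable-style trigger handler. MistServer passes the
// trigger type as the only argument, the parameters newline-separated on
// standard input, and reads "true"/"false" from standard output.
// Assumption: for USER_NEW, the first parameter is the stream name.
function decide($triggerType, $params) {
  if ($triggerType != "USER_NEW") {
    return true; // allow any event we have no opinion on
  }
  // Deny viewers of one particular (made-up) stream name.
  return $params[0] != "blocked_stream";
}

if (isset($argv[1])) { // only when actually invoked with a trigger type
  $params = explode("\n", trim(stream_get_contents(STDIN)));
  echo decide($argv[1], $params) ? "true" : "false";
}
```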

Further influencing behavior

While the triggers system allows for a lot of decision logic and possible use cases to be implemented, there are some things that are beyond the reach of a simple system like this. For example, you may want a system that periodically checks whether some connection is allowed to continue, or re-checks when certain events happen in other parts of your business logic (e.g. payment received (or not), stream content/subject changing over time, etc.).

For this purpose, after we released our triggers system, we've slowly been expanding it with API calls that supplement it. For example, the invalidate_sessions call will cause the USER_NEW trigger to be re-run for all already-accepted connections, allowing you to "change your mind" on these decisions, effectively halting playback at any desired point.

In the near future, we will also be releasing something we've been working on for a while now: our stream and session tagging systems. These will allow you to add "tags" to streams and sessions on the fly, and run certain triggers only when particular tags are present (or missing); in addition, the list of tags will be passed as one of the parameters to all applicable trigger handlers. This will add flexibility to the system for even more possibilities. Also coming soon is a new localhost-only UDP interface for the API, allowing you to simply blast JSON-format API calls over UDP to port 4242 on localhost, and they will be executed. This is a very low-cost entry point for the API, as UDP sockets can be created and destroyed on a whim and do practically no error checking.
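Once that UDP interface lands, using it from PHP could be as simple as this sketch. Since UDP is fire-and-forget, there is no reply to parse; the exact payload shape accepted by invalidate_sessions is an assumption here, so consult the API documentation.

```php
<?php
// Sketch of talking to the (upcoming) localhost-only UDP API: encode the
// API call as JSON and fire it at port 4242. No reply is expected.
// The invalidate_sessions payload shape below is an assumption.
function buildUdpPayload($command) {
  return json_encode($command);
}

function sendUdpCommand($command) {
  $socket = stream_socket_client("udp://127.0.0.1:4242", $errno, $errstr);
  if ($socket === false) {
    return false;
  }
  $bytes = fwrite($socket, buildUdpPayload($command));
  fclose($socket);
  return $bytes; // number of bytes sent, or false on failure
}

// e.g. sendUdpCommand(array("invalidate_sessions" => "mystream"));
```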

Feedback

We'd love to hear from our users what they would like to influence, and what they would use the triggers system and API calls for. What cool things are you planning (or wanting) to do? Let us know! Reach out to us using the contact form, or simply e-mail us at info@ddvtech.com.

That was it for this post! The next blog post will be by Carina, showing how to effectively use our API from PHP. Until next time!

18 Apr 2017

[Blog] Stream Latency

Hi readers, as promised by Balder, this blog post will be about latency.

When streaming live footage, latency is the amount of time that passes between something happening on camera and the moment it is shown on the stream where it is watched. While there are cases where artificial latency is added to the stream, to allow for error correction or for selecting the right camera to display at the right time, in general you want your latency to be as small as possible. Leaving artificial latency aside, I will cover some major causes of latency encountered in live streaming, and the available options for reducing latency in each of these steps.

The three main categories where latency is introduced are the following:

Encoding latency

The encoding step is the first in the process when we follow our live footage from the camera towards the viewer. Due to the wide availability of playback capabilities, H.264 is the most commonly used codec to encode video for consumer-grade streams, and I will therefore mostly focus on this codec.

While encoders are becoming faster at a rapid pace, the default settings of most of them are geared towards VoD assets. To reduce size on disk, and thereby the bandwidth needed to stream over a network, most encoders build an in-memory buffer of several packets before sending any out. The codec allows frames to reference both earlier and later frames for their data, which enables better compression: when the internal buffer is large enough, the encoder can pick which frames to reference in order to obtain the smallest set of relative differences. Turning off these so-called bi-predictive frames, or B-frames as they are commonly called, decreases latency in exchange for a somewhat higher bandwidth requirement.

The next bottleneck that can be addressed in the encoding step is the keyframe interval. When using a codec based on references between frames, sending a 'complete' set of data at a regular interval helps decrease the bandwidth necessary, and is therefore employed widely when switching between different cameras on live streams. It is easily overlooked, however, that the keyframe interval also has a large effect on latency: new viewers cannot start viewing the stream until they have received such a full frame, as they have no data to base the references on before this keyframe. This either forces new viewers to wait before the stream is viewable or, more often, causes them to be delayed by a couple of seconds, merely because that was the latest available keyframe at the time they started viewing.
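To put rough numbers on that: a new viewer joins at a random point inside a keyframe interval, so the extra startup delay (or lag behind live) averages half the interval, with a full interval as the worst case. A quick sketch:

```php
<?php
// Extra join latency caused by the keyframe interval: a viewer joining
// at a random moment waits, or lags behind live, up to one full
// interval, and half an interval on average.
function keyframeDelay($intervalSeconds) {
  return array(
    "average" => $intervalSeconds / 2,
    "worst"   => $intervalSeconds
  );
}

// A typical 4-second keyframe interval:
$delay = keyframeDelay(4);
print "average ".$delay["average"]."s, worst ".$delay["worst"]."s\n";
```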

Playback latency

The protocols used, both towards the server hosting the stream and from the server to the viewers, have a large influence on the latency of the entire process. With many vendors switching to segment-based protocols in order to use widely available caching techniques, the requirement to buffer an entire segment before it can be sent to the viewer is introduced. To avoid bandwidth overhead, these segments are usually multiple seconds in length, but even with smaller segment sizes, the differing buffering rules of these protocols and of the players capable of displaying them add an indeterminate amount of latency to the entire process.

While the most effective method of decreasing the latency introduced here is to avoid the use of these protocols where possible, on some platforms using segmented protocols is the only option available. In these cases, setting the correct segment size along with tweaking the keyframe interval is the best method to reduce the latency as much as possible. This segment size is configurable through the API in MistServer; even mid-stream if required.
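For a rough sense of scale: many players buffer around three segments before starting playback (the exact number varies per player and is an assumption here), so the protocol-induced latency is roughly the segment length times the number of buffered segments. A sketch:

```php
<?php
// Rough protocol-induced latency for segmented protocols: segment length
// times the number of segments the player buffers before starting.
// Three buffered segments is a common rule of thumb, not a fixed spec.
function segmentLatency($segmentSeconds, $segmentsBuffered = 3) {
  return $segmentSeconds * $segmentsBuffered;
}

print segmentLatency(6)."\n"; // 6-second segments: about 18 seconds behind live
print segmentLatency(2)."\n"; // 2-second segments: about 6 seconds behind live
```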

Processing latency

Any processing done on the machine serving the streams introduces latency as well, though usually in exchange for extra functionality. A transmuxing system, for example, processes the incoming streams into the various protocols needed to support all viewers, and must maintain an internal buffer of some size to facilitate this. Within MistServer, this buffer is configurable through the API.

On top of this, for various protocols, MistServer employs some tricks to keep the stream as live as possible. To do this we monitor the current state of each viewer, and skip ahead in the live stream when they are falling behind. This ensures that your viewers observe as little latency as possible, regardless of their available bandwidth.

In the near future, the next release of MistServer will contain a rework of the internal communication system, removing the wait between data becoming available on the server itself and that data being available for transmuxing to the outputs, further reducing the total latency introduced by the server.

Our next post will be by Jaron, providing a deep technical understanding of our trigger system and the underlying processes behind it.

— Erik

18 Apr 2017

[News] MistServer team at NAB show from 20th to 29th.

Hello everyone! The majority of the MistServer team will attend the NAB show from April 20th to April 29th. During this time we will have limited availability, and replies might take slightly longer than usual. If you happen to be in Las Vegas feel free to drop by our booth SU11704CM.
