News

28 Oct 2019

[Blog] Easy SSL for MistServer through Certbot (Linux-specific)

Hello everyone! In this post we want to highlight a new feature in the latest MistServer builds (since version 2.17). This version not only added an integration with Certbot for Let's Encrypt SSL certificates, but also added all SSL functionality to the Open Source edition of MistServer.

Before we start: if you're using MistServer together with a webserver (that is, running MistServer on the same server that hosts your website) we recommend using a reverse proxy instead. It just makes more sense to have a single SSL certificate, and a proxy also allows you to run MistServer on the same port as your website, which looks more professional. So, this guide is only useful for setups that run MistServer "standalone", without a webserver on the same machine. That said, let's dig into it!

With version 2.17 of MistServer we added a new tool to your MistServer install called "MistUtilCertbot". This tool takes care of the Certbot integration, meaning the entire setup can now be done with just a single command! (After both MistServer and Certbot are installed, of course.)

Install Certbot

Certbot is a *nix-only tool for easy SSL certificate management. It's packaged in most Linux distributions, so we recommend installing it through your distribution's package manager. More information on Certbot can be found here, and distribution-specific instructions can be found here.
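
For example, on Debian- or Ubuntu-based systems installing it typically comes down to a single command (package names may differ on other distributions):

sudo apt install certbot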

Run MistUtilCertbot through Certbot

Once installed you can have Certbot set up your MistServer HTTPS certificate by running the following command (run this command as the same user you would usually run certbot as; it does not matter what user MistServer is running as):

certbot certonly --manual --preferred-challenges=http --manual-auth-hook MistUtilCertbot --deploy-hook MistUtilCertbot -d DOMAIN01,DOMAIN02,ETC

You'll have to change the DOMAIN01,DOMAIN02,ETC part into your own domain(s); other than that, there's no need to make changes.
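
For instance, for a hypothetical domain example.com and its www subdomain, the filled-in command would look like this:

certbot certonly --manual --preferred-challenges=http --manual-auth-hook MistUtilCertbot --deploy-hook MistUtilCertbot -d example.com,www.example.com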

Set up auto-renewing of Certbot certificates

This differs per distribution, so we recommend following the "Set up automatic renewal" step on Certbot's instructions page. There is no need to follow any of the other steps, as the rest is taken care of by our integration.
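
Most distribution packages set up a systemd timer or cron job for this automatically. If yours doesn't, a minimal cron entry could look like this (a sketch assuming certbot is in root's PATH; Certbot recommends checking for renewal twice a day):

0 3,15 * * * certbot renew --quiet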

Done

That's it! Your MistServer now has SSL enabled and it will be auto-renewing monthly!

16 Oct 2019

[Blog] Transcript: Common OTT Problems and How to Fix Them

Hello everyone,

This blog post covers the presentation our CTO Jaron gave during IBC2019. The presentation was about common OTT problems and how to fix them; we're sure it's a great watch if you've got some questions about which protocol, codec or scaling solution you should pick. The slides are available here.

Transcript follows

Alright! Well hello everyone, as was mentioned my name is Jaron. There we go... Hi, my name is Jaron, the CTO of DDVTech. We build MistServer, so if you hear DDVTech or MistServer, one is the company and the other is the product. We're right over there. Well of course we fix all OTT problems, but you might not be using MistServer or not wanting to use MistServer so you might want to know how we solve these problems. Some of the common ones, at least.

So, I present: OTT de-Mist-ified! How you would solve these problems if you weren't using us. I'm first going to go through a little bit of OTT history and then dive into the common problems of protocol selection, codec selection, segmenting and scaling. And I'm going to try and not talk as fast because otherwise this is going to be over in a few minutes. I'm sure everyone would be happy with me not talking so fast.

Alright, history-wise. Well, first internet streaming was mostly audio-only because of bandwidth concerns. People were using modems over phone lines and "surprisingly" these tend to have enough bandwidth for about a conversation or so. It's like they were designed for it!

Then a little bit later, as bandwidth increased, people did what we now call VoD: pre-recorded videos. Back then it was just... they were uploaded to a website, you would download them, and play them in a media player. Because browsers playing video back then, well, it's just... yeah, that was it. That was not something you could take seriously.

Shortly after, IP cameras appeared, which used the Real Time Streaming Protocol (RTSP). Most still use RTSP to this day (even the very modern ones), because it's a very robust protocol. But it doesn't really work on the rest of the Internet (in browsers); it just works for dedicated applications like that.

Then, Adobe came with their Flash Player and suddenly we had really cool vector animation online, and progressive streaming support. Later they also added RTMP (the Real-Time Messaging Protocol), which is still sort of the standard for user contribution.

Now we've arrived at roughly today, and we have HTML5. That is proper media support on the Internet, finally! And, well, Flash went away in a flash (chuckle).

That's where we are now. What we call OTT: streaming over the internet instead of over traditional broadcast.

Let's go into the problems now. What should we do - today - to do this?

Protocol selection: protocol selection is mostly about device compatibility, because no matter where you want to stream there's some target device that you want to have working.

This could be people's cell phones, it could be set-top boxes, smart TVs... There's always something you want to make sure works, and you can't make everything work. Well, we do our best! But making everything work is infeasible, so you tend to focus on a few target groups of devices that you want to work. This decision basically boils down to, if you are targeting iOS you're using:

  • HLS or WebRTC for live, because those are the only two that Apple will allow for live on iOS.
  • For VoD you tend to do MP4 because that does work and it's a bit easier to work with. Though you could do VoD over HLS and WebRTC if you really wanted to.

For most other consumer devices it's a mixture of protocols that tend to work roughly equally well:

  • MP4 and WebM which are similar to file downloads, except they play directly in-browser.
  • The fancy new kid on the block, WebRTC, which does low latency specifically.
  • The segmented protocols, which are HLS, DASH and CMAF. They are roughly equivalent nowadays.

For special cases like IP cameras or dedicated devices like set top boxes, RTSP, RTMP and TS are still used in combination with the rest.

For most people it tends to come down to using HLS for live and MP4 for VoD, because that's the combination that works on practically everything. It's not always ideal: HLS tends to have a lot of latency, and MP4 may not work well everywhere because it has a very large header, so it can take a while to download and start. So, you might want to pick a different protocol instead.

So! That's how you should select your protocol, roughly of course. I'm going to do this pretty quickly because you can't go into all the problems in one go.

The next problem is codec selection. Now, codec selection is also largely about device compatibility, because depending on what chipset is in a device you may or may not have hardware accelerated decoding support. Hardware accelerated decoding support is important, because if it's not there you can literally feel the phone burning up in your hand. It's trying to decode something that it doesn't have a chip for, so it's doing it in software; software requires CPU, which requires power, which burns up your battery. So, if you're doing anything mobile you need to have hardware acceleration, and the hardware acceleration included on chips tends to change over time.

Right now, H.264 is the most widely supported codec. It works on pretty much every consumer device that still works. Maybe some people have something really old from 15 years ago that's somehow still functional; then you might have a problem, but I don't think anyone expects modern video to play on devices like that anymore.

We also have HEVC (also known as H.265; they're two names for the same codec). It's on newer devices. It works great and gives you better quality per byte of data. The annoying parts are:

  • Not all devices have it. Just the more modern ones.
  • The patent pool is a nightmare. H.264 also has a patent pool, but it's pretty clear how you pay royalties. With HEVC no one even really knows who you should be paying and how much, so people tend to stay away from it, especially because not all devices are compatible.

For the future, I'm kind-of hoping that AV1 is going to be the next big thing. Because there's so many companies backing it; there's hardware vendors doing it. I'm guessing that within the next few years we will see support for AV1 on all modern devices. And, since there are theoretically no patent pools for AV1 it would also be free to use. I think if all devices start adding this and it's free to use, it would be pretty much a no-brainer to switch to AV1 in the future. So, prepare for AV1 and use H.264 today. If you really care about limiting bandwidth as much as you can while keeping high quality you might want to think about HEVC as well.

Alright, next problem: segmenting. This is specific to segmented protocols, so: HLS, DASH, CMAF. Also HSS, but that's not very popular anymore since it's mostly just CMAF with a different name. These protocols work with small segments of data, so basically smaller downloads of a couple seconds (sometimes milliseconds or minutes) of media data that are then played one after another. The idea is that you can keep the old methods of downloading files and sending files over regular plain old (web)servers without having to put anything specific for media in there. You can still do live streams, because you have tiny little pieces that you can keep adding to the stream and then removing the old ones.
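
As a rough sketch of what such an index looks like in practice, here is a minimal HLS media playlist with two-second segments (all names and numbers are illustrative):

 #EXTM3U
 #EXT-X-VERSION:3
 #EXT-X-TARGETDURATION:2
 #EXT-X-MEDIA-SEQUENCE:120
 #EXTINF:2.000,
 segment_120.ts
 #EXTINF:2.000,
 segment_121.ts

As the live stream continues, new segments are appended at the bottom and the oldest ones drop off the top: exactly the add-and-remove behaviour described above.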

The issue with segmenting is how long those segments are. Do you make them one second? Half a second? A minute? The things that are affected by segmenting are startup time, latency and stability. When it comes to startup time, smaller segments load faster, because they take less time to download. That means that the startup time is reduced. So, if you want low startup time, make small segments. The same goes for latency: because the segments are smaller, the buffer on the client side can be smaller, and then latency is lower. This is the technique that everyone has been using to do low latency HLS for the last few years. There's new stuff coming out soon, but this is the current technology. Basically they make the segments really, really small and the latency is low. They call it all kinds of fancy names, but that's what they're doing underneath.

The big downside to small segments is stability, because longer segments are much more stable and have less overhead. So you're wasting more bandwidth by doing small segments, and you're decreasing your stability by doing small segments. If there's something wrong with the connection and even one segment is missing, your stream starts buffering or stalling, and nobody wants that. It's a constant battle between making them long or short, and it depends on whether you care more about the latency and startup time or more about the stability. The annoying part is that most people want all three, and you sadly can't have all three. So the key thing is knowing your market and where your priorities are: whether you're doing something where latency and startup time are the big thing.

For example if you're in the gambling industry or something you want to do small segments. Or, maybe not even go with segments at all but use a real real-time streaming protocol like WebRTC or RTSP. If you're more traditional and sending out content then it's a good idea to make longer segments. It will mean it will take slightly longer to start up, but it will play much more smoothly and it will use less bandwidth so it's all about knowing what you're doing and picking the configuration that goes with that.

All right, the final problem is scaling. Scaling can mean two different things: it can either mean delivering at scale, so you have lots and lots of viewers or lots and lots of different broadcasts or even lots of both; or it can mean that you want to scale up and down as demand changes. Maybe you're a platform that has a few people using it at night, and then during the daytime the usage explodes and goes way up, and at night it goes way back down. It would be a waste to have servers running all night long not doing anything, so you kind of want to turn most of them off and then put them back up in the morning. Something like that. There are several approaches to solving the scaling problem.

You can solve it by, for example, partially using cloud capacity. The capacity you would always need you would have in-house, and then on the peak times you put some extra cloud capacity in there. It tends to be more expensive but since it's really easy to turn it on and off people love adding that for their peak times.

You could use a CDN to offload the scaling problem. You create the content, you send it to the CDN, and now it's their problem. It's not really solving it, it's more moving it to someone else. Which is nice, because they'll solve it for you.

Peer to peer can be a solution. A couple of companies here do that. By sending the data between all your different viewers you don't have to send it all from your own servers. You can save bandwidth and make scaling slightly easier. The problem is that peer to peer only really works well if you have a lot of viewers watching the same moment in the same stream. So, for stuff like the Olympics this will work great, but if it's, you know, an episode of some TV show from two years ago... you're probably not going to have a good time doing this.

Of course there's the traditional adding and removing of servers. Which is a pretty obvious way to do things, but it's hard to do logistically, because you need to wait (lead time on physical servers) to do this.

Load balancing is required for most of these things, if you want to do them. Deciding where viewers are going to go. Are they going to go to a particular server, or not? You can sort-of move them away from servers you want to turn off, and then move them to the servers you want to keep on, and switch them in and out as needed this way.

There's always just buying more bandwidth for your existing servers, or having some kind of deal with your bandwidth provider that lets you use more or less depending on the time of day. And combining any of these. There's no real easy answer to this, and it really depends a lot on how your business is structured.

Key to doing this (scaling) properly is having good statistics and metrics on your system. If you know what the usage is during a particular time of day, you can prepare for it. There tends to be a very clear pattern in these things. So if you know what's happening and how much load you're expecting for particular events or times of day, you can kind-of anticipate it, and make sure that these above things are done in time and not after the fact.

Alright! So back to us, MistServer. We provide ready-made solutions for all of these problems and many others. So you don't have to reinvent the wheel yourself, and you can sort-of offload some of the thinking and the troubleshooting to us. If you're facing some other problem we can probably help as well.

You can ask questions if you would like to, or you can drop by our booth right there, or shoot us an email if you're watching this online, or if you come up with a question later after the show.

29 Oct 2018

[Blog] Transcript: Making sense out of the fragmented OTT delivery landscape.

Hello Everyone,

Last month was IBC2018 and they've finally released the presentations. We thought it would be nice to add our CTO Jaron's presentation as he explains how to make sense of the fragmented OTT landscape.

You can find the full presentation, slides and a transcript below.

slides

Transcript

SLIDE #1

Alright, hello everyone. Well as Ian just introduced I'm going to talk about the fragmented OTT delivery landscape.

SLIDE #2

Because, well, it is really fragmented. There are several types of streaming.

To begin we've got real time streaming, the streaming that everyone knows and loves, so to speak.

And we have pseudo streaming, which is like if you have an HTTP server and you pretend a file is there, but it's not really a file: it's actually a live stream that you're sending while pretending it's a file.

But that wasn't enough, of course! Segmented streaming came afterwards - which is the currently popular method, where you segment a stream into several parts, you have an index, and that index just updates with new parts as they become available.

Now it would be nice if it was this simple and it was just three methods, but unfortunately it is a little bit more complicated.

SLIDE #3

All of these methods have several protocols they can use to deliver. There are different protocols for real-time, pseudo and segmented streaming, and of course none of these are compatible with each other.

There are also players, besides all this. There is a ton of them; just in this hall alone there are at least seven or eight, and they all say they do the same thing. And they do, and they all work fine. But how do you know what to pick? It's hard.

SLIDE #4

We should really stop making new standards.

SLIDE #5

So, real time streaming was the first one I mentioned. There's RTMP, which is the well-known protocol that many systems still use as their ingest. But it's not used very often in delivery, as Flash is no longer supported in browsers. It's a very outdated protocol: it doesn't support HEVC, AV1 or Opus; none of the newer codecs are in there. But the protocol itself is highly supported by encoders and streaming services. It's something that's hard to get rid of.

Then there's RTSP with RTP at the core, which is an actual standard unlike RTMP which is something Adobe just invented. RTSP supports a lot of things and it's very versatile. You can transport almost anything through it, but it's getting old. There's an RTSP version 2, which no one supports, but it exists. Version one is well supported but only in certain markets, like IP cameras. Most other things not as much and in browsers you can forget about it.

And then there's something newer for real-time streaming, which is WebRTC. WebRTC is the new cool kid on the block and it uses SRTP internally which is RTP with security added. Internally it's basically the same thing, but this works on browsers, which is nice as that means you can actually use it for most consumers unlike RTSP.

That gives you a bit of an overview of real time streaming. Besides these protocols you can also pick between TCP and UDP. TCP is what most internet connections use. It's a reliable method to send things, but because of it being reliable it's a bit slower and the latency is a bit higher. UDP is unreliable but has very low latency. Depending on what you're trying to do you might want to use one or the other.

All of these protocols work with TCP and/or UDP. RTMP is always TCP, RTSP can be either and WebRTC is always UDP. The spec of WebRTC says it can also use TCP, but I don't know a single browser that supports it so it's kind-of a moot point.

SLIDE #6

Then there's pseudo streaming, which as I mentioned before uses fake files that are playing while they're downloading. They're infinite in length and duration, so you can't actually download them. Well, you can, but you end up with a huge file and you don't know exactly where the beginning and the end are, so it's not very nice.

While pseudo streaming is a pretty good idea in theory, there are some downsides. One is the disagreement on what format to pseudo-stream in, because there are lots of formats - like FLV, MP4, OGG, etcetera - and they all sort of work, but none perfectly. The biggest downside is that you cannot cache these streams, as they're infinite in length. So a proxy server will not store them and you cannot send them to a CDN; plus, how do you decide where the beginning and end are? So pseudo streaming is a nice method, but it doesn't work very well in a scalable system.

SLIDE #7

Now segmented streaming kind of solves that problem, because when you cut the stream into little files and have an index to say where those files are you can upload those little files to your CDN or cache them in a caching server and these files will not change. You just add more and remove others and the system works.

There are some disagreements here too between the different formats. Like what do we use to index? HLS uses text, but DASH uses XML. They contain the same information, but differ in the way of writing it. The container format for storing the segments themselves is also not clear: HLS uses TS and DASH uses MP4. Though they are kind of standardizing now to fMP4, but let's not go too deep here. The best practices and allowed combinations of what codecs work and which ones do not, do you align the keyframes or not - all of that differs between protocols as well. It’s hard to reach an agreement there too.

The biggest problem in segmented streaming is the high latency, because many players want to buffer a couple of segments before playing. That means that if your segments are several seconds long, you will have a minimum latency of several times that. Which is not real-time in my understanding of the word "real-time".

The compatibility with players and devices is also hard to follow. HLS works fine in iOS, but DASH does not unless it's fMP4 and you put a different index in front of it and it'll then only play on newer iOS models. It's hard to keep track of what will play where, that's also a kind of fragmentation you will need to solve for all of this.

SLIDE #8

So I kind of lied during the introduction when I had this slide up: there's even more fragmentation than just these three types and their subtypes.

There's also encrypted streaming. When it comes to encrypted streaming there's FairPlay, PlayReady, Widevine and CENC, which tries to combine them a little bit. But even there they don't agree on what encryption scheme to use. So encryption is fragmented on two different levels.

Then there are reliable transports now, which are getting some popularity. These are intended for between servers, because you generally don't do this to the end consumer. There are several options here too: some of these are companies/protocols that have been around for a while, some are relatively new, some are still in development, some are being standardized and some are not. That's also a type of fragmentation you may have to deal with if you do OTT streaming.

SLIDE #9

When it comes to encrypted streaming there is the common encryption standard, CENC. Common encryption, that is what it stands for, but it's not really common because it only standardizes the format and how to transport it. It standardizes on fMP4, it standardizes on where the encryption keys are, etc. But not what type of encryption to use. All encryption types use a block cipher, but some are counter based and others are not. So depending on what type of DRM you're using you might have to use one or the other. It's not really standardized, yet it is, so it's confusing on that level as well.

SLIDE #10

Then the reliable transports, they are intended for server to server. All of them use these techniques in some combination. Some add a little bit of extra fluff or remove some of it. But they all use these techniques at the core.

Forward error correction sends extra data with the stream that allows you to calculate the contents of data that is not arriving. This means not wasting any time asking for retransmits, since you can just recalculate what was missing so you don't have to ask the other server and have another round-trip in between.

Retransmits are sort of self-explanatory: the receiving end says "hey, I didn't receive packet X, can you retransmit it, send me another copy". This wastes time, but eventually you do always get all the data, so you can resolve the stream properly.

Bonding is something on a different level altogether where you connect multiple network interfaces like wireless network and GSM and you send data over both, hoping that with the combination of everything it will all end up arriving.

If you combine all three techniques of course you will get really good reception, at the cost of lots of overhead.

There's no standardization at all yet on reliable transports. It's very unclear what the advantages and disadvantages are of all these available ones. The ones listed in the previous slide all claim to be the best, to be perfect and to use a combination of the techniques. There's no real guide as to which you should be using.

So... lots of fragmentation in OTT.

SLIDE #11

So what do you do to fix all that fragmentation? Now this is where my marketing kicks in.

Right there is our booth. We are DDVTech, we make MistServer, and it's a technology you can use to build your own systems on top of. We give you the engine you use underneath your own system, and we help you solve all of these problems so you can focus on what makes your business unique and not have to worry about standardization, what to implement, or what the next hot thing tomorrow is going to be.

We also allow you to auto-select protocols based on the stream contents and device you're trying to play on or what the network conditions are. Basically everything you need to be successful when you're building an OTT platform.

SLIDE #12

That’s the end of my presentation, if you have any questions you can drop by our booth or shoot us an email on our info address and we’ll help you out and get talking.

23 Aug 2018

[Blog] How to build a Twitch-alike service with MistServer

Hey all! First of all, our apologies for the lack of blog posts recently - we've been very busy with getting the new 2.14 release out to everyone. Expect more posts here so that we can catch back up to our regular posting pace!

Anyway - hi 👋! This is Jaron, not with a technical background article (next time, I promise!) but with a how-to on building your own social streaming service (like Twitch or YouTube Live) using MistServer. We have more and more customers running these kinds of implementations lately, and I figured it would be a good idea to outline the steps needed for a functional integration for future users.

A social streaming service usually has several common components:

  • A login system with users
  • The ability to push (usually RTMP) to an "origin" server (e.g. sending your stream to the service)
  • A link between those incoming pushes and the login system (so the service knows which stream belongs to which user)
  • A check to see if a viewer is allowed to watch a specific stream (e.g. paid streams, password-protected streams, age-restricted streams, etc.)
  • The ability to record streams and play them back later as Video on Demand

Now, MistServer can't help you with the login system - but you probably don't want it to, either. You'll likely already have a login system in place and want to keep that and its existing database. It's not MistServer's job to keep track of your users anyway. The Unix philosophy is to do one thing and do it well, and Mist does streaming; nothing else.

How to support infinite streams without configuring them all

When you're running a social streaming service, you need to support effectively infinite streams. MistServer allows you to configure streams over the API, but that is not ideal: Mist starts to slow down after a few hundred streams are configured, and the configuration becomes a mess of old streams.

Luckily, MistServer has a feature that allows you to configure once, and use that stream config infinite times at once: wildcard streams. There's no need to do anything special to activate wildcard mode: all live streams automatically have it enabled. It works by placing a plus symbol (+) behind the stream name, followed by any unique text identifier. For example, if you configured a stream called "test" you could broadcast to the stream "test", but also to "test+1" and "test+2" and "test+foobar". All of them will use the configuration of "test", but use separate buffers and have separate on/off states and can be requested as if they are fully separate streams.

So, a sensible way to set things up is to use for example the name "streams" as stream name, and then put a plus symbol and the username behind it to create the infinite separate streams. For example, user "John" could have the stream "streams+John".
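
Each wildcard instance can then be requested like any other stream. For example, John's stream would be reachable at URLs like these (a sketch, assuming a server at example.com and MistServer's default HTTP port):

http://example.com:8080/hls/streams+John/index.m3u8
http://example.com:8080/streams+John.html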

Receiving RTMP streams in the commonly accepted format for social streaming

Usually, social streaming uses RTMP URLs following a format similar to rtmp://example.com/live/streamkey. However, MistServer uses the slightly different format rtmp://example.com/passphrase/streamname. It's inconvenient for users to have to comply with Mist's native RTMP URL format, so it makes sense to tweak the config so they can use the more common format instead.

The ideal method for this is the RTMP_PUSH_REWRITE trigger. This trigger will call an executable/script or retrieve a URL, with the RTMP URL and the IP address of the user attempting to push as its payload, before MistServer does any parsing whatsoever. Whatever your script or URL returns is then parsed by MistServer as if it were the actual RTMP URL, and processing continues as normal afterwards. Returning an empty URL results in the push attempt being rejected and disconnected. Check MistServer's manual (under "Integration", subchapter "Triggers") for the documentation of this trigger.

An example in PHP could look like this:

<?php
//Retrieve the data from Mist
$payload = file_get_contents('php://input');
//Split payload into lines
$lines = explode("\n", $payload);
//Now $lines[0] contains the URL, $lines[1] contains the IP address.

//This function is something you would implement to make this trigger script "work"
$user = parseUser($lines[0], $lines[1]);
if ($user != ""){
  echo "rtmp://example.com//streams+".$user;
}else{
  echo ""; //Empty response, to disconnect the user
}
//Take care not to print anything else after the response, not even any newlines! MistServer expects a single line as response and nothing more.

The idea is that the parseUser function looks up the stream key from the RTMP URL in a database of users, and returns the username attached to that stream key. The script then returns the new RTMP URL as rtmp://example.com//streams+USERNAME, effectively allowing the push as well as directing it to a unique stream for the authorized user. Problem solved!

How to know when a user starts/stops broadcasting

This one is pretty easy with triggers as well: the STREAM_BUFFER trigger is ideal for this purpose. The STREAM_BUFFER trigger will go off every time the buffer changes state, meaning that it goes off whenever it fills, empties, goes into "unstable" mode or "stable" mode. Effectively, MistServer will let you know when the stream goes online and offline, but also when the stream settings aren't ideal for the user's connection and when they go back to being good again. All in real-time! Simply set up the trigger and store the user's stream status into your own local database to keep track. Check MistServer's manual (under "Integration", subchapter "Triggers") for the documentation of this trigger.
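
A minimal sketch of such a trigger handler in PHP, assuming (like the trigger above) a newline-separated payload with the stream name on the first line and the new buffer state on the second - check the manual for the exact payload layout. The updateStreamStatus function is hypothetical and stands in for whatever database update fits your system:

<?php
//Retrieve the data from Mist
$payload = file_get_contents('php://input');
//Split payload into lines
$lines = explode("\n", $payload);
//Assumption: the first line is the stream name, e.g. "streams+John"
$streamName = $lines[0];
//Assumption: the second line is the new buffer state
$state = isset($lines[1]) ? $lines[1] : "";
//Hypothetical function: store the stream's status in your own database
updateStreamStatus($streamName, $state);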

Access control

Now, you may not want every stream accessible to every user. Limiting this access in any way is a concept usually referred to as "access control". My colleague Carina already wrote an excellent blog post on this subject last year, and I suggest you give it a read for more on how to set up access control with MistServer.

Recording streams and playing them back later

The last piece of the puzzle: recording and Video on Demand. To record streams, you can use our push functionality. This sounds a little out of place, until you wrap your head around the idea that MistServer considers recording to be a "push to file". A neat little trick is that configuring an automatic push for the stream "stream+" will automatically activate this push for every single wildcard instance of the stream "stream"! Combined with our support for text replacements (detailed in the manual in the chapter 'Target URLs and settings'), you can have this automatically record to file. For example, a nice target URL could be: /mnt/recordings/$wildcard/$datetime.mkv. That URL will sort recordings into folders per username and name the files after the date and time the stream started. This example records in Matroska (MKV) format (more on that format in my next blog post, by the way!), but you could also record in FLV or TS format simply by changing the extension.

If you want to know when a recording has finished, how long it is, and what filename it has... you guessed it, we have a trigger for that purpose too. Specifically, RECORDING_END. This trigger fires off whenever a recording finishes writing to file, and the payload contains all relevant details on the new recording. As with the previous triggers, the manual, under "Integration", subchapter "Triggers", has all relevant details.
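
As a sketch of what such a handler could look like (the manual lists the exact newline-separated payload fields; here we only assume the stream name comes first, and registerRecording is a hypothetical function you would implement, e.g. to insert a new VoD entry into your database):

<?php
//Retrieve the data from Mist
$payload = file_get_contents('php://input');
//Split payload into lines
$lines = explode("\n", $payload);
//Assumption: the first line is the stream name; the remaining lines hold the
//recording details (see the manual for the exact fields and their order)
$streamName = $lines[0];
//Hypothetical function: register the finished recording in your database
registerRecording($streamName, $lines);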

There is nothing special you need to do to make the recordings playable through MistServer as well - they can simply be set up like any other VoD stream. But ideally, you'll want to use something similar to what was described in another of our blog posts last year, on how to efficiently access a large media library. Give it a read here, if you're interested.

In conclusion

Hopefully that was all you needed to get started with using MistServer for social streaming! As always, contact us if you have any questions or feedback!

5 Jun 2018

[Blog] Generating a live test stream from a server using command line

Hello everyone! Today I wanted to talk about testing live streams. As you will probably have guessed: in order to truly test a live stream you'll need to be able to give it a live input. In some cases that might be a bit of a challenge, especially if you only have shell access and no live input available. It's for those situations that we've got a script that uses ffmpeg to generate a live feed which we call videogen. The script itself is made for Linux servers, but you could take the ffmpeg command line and use it for any server able to run ffmpeg.

What is videogen

Videogen is a simple generated live stream for testing live input/playback without the need for an actual live source somewhere. It is built on some of the examples available at the ffmpeg wiki site. It looks like this:

[Image: the videogen test pattern in action, showing the local time and the stream timestamp]

ffmpeg

As you might've suspected, in order to use videogen you'll need ffmpeg. Make sure to have it installed or have the binaries available in order to run videogen.

Installing videogen

Place the videogen file in your /usr/local/bin directory, or make your own videogen by pasting this code in a file and making it executable:

 #!/bin/bash

 ffmpeg -re -f lavfi -i "aevalsrc=if(eq(floor(t)\,ld(2))\,st(0\,random(4)*3000+1000))\;st(2\,floor(t)+1)\;st(1\,mod(t\,1))\;(0.6*sin(1*ld(0)*ld(1))+0.4*sin(2*ld(0)*ld(1)))*exp(-4*ld(1)) [out1]; testsrc=s=800x600,drawtext=borderw=5:fontcolor=white:fontsize=30:text='%{localtime}/%{pts\:hms}':x=\(w-text_w\)/2:y=\(h-text_h-line_h\)/2 [out0]" \
 -acodec aac -vcodec h264 -strict -2 -pix_fmt yuv420p -profile:v baseline -level 3.0 \
 "$@"

Using Videogen

Videogen is rather easy to use, but it does require some manual input, as you need to specify the output. You can also override any of the codec settings inside, in case you want or need something other than our default settings.

The only required manual input is the type of output you want and the output URL (or file). For MistServer your output options are:

RTMP

 videogen -f flv rtmp://ADDRESS/APPLICATION/STREAM_NAME

RTSP

 videogen -f rtsp rtsp://ADDRESS:PORT/STREAM_NAME

TS Unicast

 videogen -f mpegts udp://ADDRESS:PORT

TS Multicast

 videogen -f mpegts udp://MULTICASTADDRESS:PORT

As it's all run locally it doesn't really matter which protocol you'll be using, except for one point: RTMP cannot handle multiple bitrates using this method, so if you want to create a multi-bitrate videogen you'll usually want to use TS.

Additional parameters

You'll have access to all the additional parameters that ffmpeg provides for both video and audio encoding, simply by adding them after the videogen command. When parameters conflict, ffmpeg uses the last one given, so your additions override the script's defaults. For all the ffmpeg parameters we recommend checking the ffmpeg documentation for codecs, video and audio.

Some of the parameters we tend to use more often are:

-g NUMBER

This determines when keyframes show up: it sets the number of frames to pass before inserting a keyframe. When set to 25 you'll get one keyframe per second, as videogen runs at 25fps.

-s WIDTHxHEIGHT

This changes the resolution. The default of videogen is 800x600, so setting this to 1920x1080 will make it an "HD" stream, though the quality difference is barely noticeable with this script. We tend to use distinct resolutions to verify that a track is working correctly.

-c:v hevc or -c:v h264

This changes the video codec. The default is h264, baseline profile, level 3.0, which should be compatible with any modern device. Switching to h265 (HEVC), or to "default" h264 without the baseline restriction, might be exactly what you want to test. Do note that HEVC cannot work over RTMP; use RTSP or TS instead!

-c:a mp3 -ar 44100

This changes the audio codec. The default is aac, so knowing how to set mp3 instead can be handy. Just be sure to add an audio sample rate, as MP3 tends to bug out when it's not set. We tend to use 44100, as most devices will work with this rate.
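
Combining a few of these, a hypothetical full command for a 1080p h264 stream with a keyframe every two seconds (50 frames at 25fps) would look like this:

 videogen -g 50 -s 1920x1080 -f flv rtmp://ADDRESS/APPLICATION/STREAM_NAME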

Multibitrate videogen

You'll probably want to try out a multi-bitrate videogen as well. You can, but you'll want to use TS instead of RTMP, as RTMP cannot handle multiple bitrates through a single stream as push input.

You can find our multi bitrate videogen here.

You can also make an executable file with the following command in it:

 #!/bin/bash

 #Multi-bitrate videogen. If you want to edit qualities or codecs, edit the
 #parameters per track profile. If you want to add qualities, just be sure to
 #map the source first (as audio or video, depending on what kind of track you
 #want to add). Video tracks will generally need -pix_fmt yuv420p in order to
 #work with this script.

 exec ffmpeg -hide_banner -re -f lavfi -i "aevalsrc=if(eq(floor(t)\,ld(2))\,st(0\,random(4)*3000+1000))\;st(2\,floor(t)+1)\;st(1\,mod(t\,1))\;(0.6*sin(1*ld(0)*ld(1))+0.4*sin(2*ld(0)*ld(1)))*exp(-4*ld(1)) [out1]; testsrc=s=800x600,drawtext=borderw=5:fontcolor=white:fontsize=30:text='%{localtime}/%{pts\:hms}':x=\(w-text_w\)/2:y=\(h-text_h-line_h\)/2 [out0]" \
 -map a:0 -c:a:0 aac -strict -2 \
 -map a:0 -c:a:1 mp3 -ar:a:1 44100 -ac:a:1 1 \
 -map v:0 -c:v:0 h264 -pix_fmt yuv420p -profile:v:0 baseline -level 3.0 -s:v:0 800x600 -g:v:0 25 \
 -map v:0 -c:v:1 h264 -pix_fmt yuv420p -profile:v:1 baseline -level 3.0 -s:v:1 1920x1080 -g:v:1 25  \
 -map v:0 -c:v:2 hevc -pix_fmt yuv420p -s:v:2 1920x1080 -g:v:2 25 \
 "$@"

This will create a multi-bitrate stream with AAC and MP3 audio tracks, 800x600 and 1920x1080 h264 video tracks, and a single 1920x1080 h265 (HEVC) video track. That should cover "most" multi-bitrate needs.

You will always want to combine this with the mpegts output of ffmpeg, so using it comes down to:

 multivideogen -f mpegts udp://ADDRESS:PORT

Using videogen or multivideogen with MistServer directly

Of course you can also use videogen or multivideogen without a console; you will still have to put the scripts on your server (preferably in /usr/local/bin), however.

To use them together with MistServer just use ts-exec and the mpegts output of ffmpeg like this:

MistServer source:

 ts-exec:videogen -f mpegts -
 ts-exec:multivideogen -f mpegts -

[Image: example of how to fill this in in the MistServer interface]

You can set the streams to "always on" to have a continuous live stream, or leave them on the default settings and only start the live stream when you need it. Keep in mind that as long as they're active they will use CPU.
