[Release] 3.1 stable release
The 3.1 release is upon us! Downloads can be found here.
First of all: apologies, this build is still Linux-only for now. While a lot of progress has been made on fixing macOS and Windows compatibility in the past few months, it's not quite ready yet, and we felt it prudent to release the fixes and features this build brings sooner rather than later.
This release also marks a new style of release post. We now highlight noteworthy features with some text explaining what each feature is and why you'd want to use it; the full changelog is, of course, still available as usual.
TS-based LLHLS support
What: Support for Apple's LLHLS (Low Latency HTTP Live Streaming) protocol was already included in 3.0. However, that support is CMAF-based (the latest industry standard). HLS also supports an older TS-based segmenting method; this feature adds support for LLHLS in that format as well.
Why: This feature is useful for those that want or need to support LLHLS but are bound to MPEG2-TS-based playback for some reason (e.g. legacy device support, the particular TVs they are using, etc.).
Forward Error Correction
What: This adds support for ProMPEG Forward Error Correction to our MPEG2-TS-based output over UDP. FEC adds an additional layer of error correction packets on top of the normal packets, to allow for recovering from packet loss using parity data.
Why: Sometimes a connection can only be made over UDP, but there is a small amount of packet loss you want to correct for. This makes that possible. If you're not stuck using UDP, there are likely better solutions than this one.
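To give a feel for the trade-off involved, 2D parity FEC schemes like ProMPEG arrange media packets in a matrix and send one parity packet per row and per column. The column/row counts below are example values, not MistServer defaults; this is just a sketch of the bandwidth overhead calculation:

```python
# Sketch: bandwidth overhead of a 2D parity FEC matrix (ProMPEG-style).
# With `cols` columns and `rows` rows, every cols*rows media packets are
# protected by `cols` column-parity packets plus `rows` row-parity packets.
def fec_overhead(cols: int, rows: int) -> float:
    media_packets = cols * rows
    parity_packets = cols + rows
    return parity_packets / media_packets

# A 10x10 matrix costs 20 parity packets per 100 media packets.
print(f"{fec_overhead(10, 10):.0%}")  # prints 20%
```

Larger matrices lower the overhead but can recover from fewer lost packets per block, which is why the matrix size is usually tuned to the expected loss pattern.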
Input SDP from file / Push output SDP to file
What: RTSP, WebRTC and SIP (VoIP) all use RTP-based transport for their media data. Some users want to bypass those signalling protocols and instead work with raw RTP over the network. The SDP input and output allows you to preconfigure the "handshake" normally done over the signalling protocol, so you can directly send or receive media without needing to do that handshake.
Why: This is especially useful when broadcasting multicast over, say, a company intranet, since the "handshake" should only be done once while there may be a practically unlimited number of receivers in the network. They can all open the SDP file to receive the data needed to connect. Until this feature became available, multicast over the local network was only possible using raw UDP (in an MPEG2-TS transport), not RTP. Support for SDP is not limited to multicast use cases, though; that is just the most noteworthy one.
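For illustration, a minimal SDP file describing an RTP multicast video session could look like this (the multicast address, port and codec are placeholder values, not anything MistServer generates by default):

```
v=0
o=- 0 0 IN IP4 239.255.0.1
s=Example multicast stream
c=IN IP4 239.255.0.1
t=0 0
m=video 5000 RTP/AVP 96
a=rtpmap:96 H264/90000
```

Every receiver on the intranet can open this same file to join the session; no per-receiver signalling is needed.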
AAC file input support
What: The AAC codec is already supported in Mist, but separate .aac files are not (they need to be inside some other container, instead). This update adds support for plain .aac files as a VoD input format.
Why: Because this is the most space-efficient format to store standalone AAC audio, and it wasn't supported yet. In addition, reading AAC files is needed to replace AAC audio tracks with custom audio when pushing RTMP out, a feature that is also part of this release.
Support for overriding AAC audio in RTMP push output
What: If a stream contains AAC audio, this feature lets you replace the audio in a stream with a looped version of an external AAC audio file. The original audio is not transmitted.
Why: This was built to support the use case where a stream contains audio material that causes e.g. YouTube or Twitch to block the stream (e.g. copyrighted songs), and you want to replace the audio track with a placeholder when sending a "lite" version of the stream to these platforms.
Split config support
What: The ability to split up the config file by section, so that each section can be loaded from a different file on the system.
Why: This makes it easier to set up a generic config that can be re-used and synced across multiple installs, while keeping some of the config stored separately for each install that won't be overwritten. It's also possible to make part of the config read-only and other parts read-write by setting file permissions of their respective files accordingly.
RIST support
What: RIST is a semi-reliable transport, similar to Haivision SRT, Zixi and BRT. This means that it makes a connection that cannot be controlled (e.g. over the public internet instead of private networks) fairly predictable in behaviour. This adds support for RIST push output and RIST pull input, both in both caller and callee modes. Unfortunately our build system does not yet support building the RIST library itself, so this feature is currently only available if you compile MistServer yourself.
Why: RIST is rapidly gaining popularity, and there is still no clear "winner" in the reliable transport protocols segment today. Mist aspires to connect anything to anything, so we try to support as many relevant protocols as possible.
HEVC/H265 support in browsers
What: Most browsers don't support decoding of H265/HEVC video streams through any method (with or without hardware acceleration). A notable exception is recent versions of iOS, but otherwise mobile and desktop browsers alike generally won't support this codec. It is, however, possible to decode the video in software inside the browser, and this feature adds such H265/HEVC playback support to the Mist player.
Why: H265/HEVC has a much better compression ratio than H264/AVC does, which can be critical in ultra low bandwidth situations. Unfortunately, software decoding of H265/HEVC is fairly slow - so this works best for low framerate or low resolution streams. The player does automatically skip frames to stay roughly at real time speed if/as needed, though - so even full resolution playback should be acceptable quality. There is another downside: the player currently only supports video, no audio.
[Blog] MistServer and Secure Reliable Transport (SRT)
Hello everyone, this article is about using Haivision SRT together with MistServer.
What is Secure Reliable Transport (SRT)?
Secure Reliable Transport, or SRT for short, is a method to send stream data over unreliable network connections. Do note that it is meant for server traffic; there are no SRT players. The main advantage of SRT is that it allows you to push a playable stream over networks that otherwise would not work properly. Keep in mind, however, that on a "perfect connection" it would just add unnecessary latency, so it is mostly something to use when you have to rely on public internet connections.
How to use SRT in MistServer
SRT is implemented to behave like srt-live-transmit, so using SRT in MistServer should feel familiar if you have used srt-live-transmit before. Filling in a host on one side implies caller mode, while leaving the host out implies listener mode.
The only difference is that you only need to set up one side of the connection; the other side is implied by how you use it. You can also override any "default" setting by using URL parameters:
- Not setting a host implies Listener mode for the input/output
- Setting a host implies Caller mode for the input/output
- You can always override the mode by adding ?mode=caller or ?mode=listener as a parameter
- Not setting a host will default the bind address to 0.0.0.0
Both caller and listener inputs are set up by creating a new stream through the stream panel.
SRT LISTENER INPUT
This input is set up by creating/editing a new stream and setting the following source:
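For example, a listener-mode source could look like this (the port number here is a placeholder):

```
srt://:8888
```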
By leaving out the host you imply Listener mode, thus instructing MistServer to listen on all available addresses on the given port for SRT stream data. Another method would be:
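That alternative could look like this (the address and port are placeholders):

```
srt://0.0.0.0:8888?mode=listener
```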
This will force MistServer to open the given address/port as listener mode (host is optional, if left out 0.0.0.0 will be used).
If no SRT data comes in, it behaves like other MistServer inputs: if set to "always on" it will keep retrying indefinitely; if set to "default" it will try for about 20 seconds, then retry once a new viewer tries to open the stream.
SRT CALLER INPUT
This input is set up by creating/editing a new stream and setting the following source:
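For example (the host and port are placeholders):

```
srt://example.com:8888
```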
By setting the host you will put the SRT input in caller mode, thus instructing it to connect to the given address/port and look for SRT data to receive.
Another method would be:
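That alternative could look like this (again, host and port are placeholders):

```
srt://example.com:8888?mode=caller
```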
Though not recommended, you could use this to set up caller mode. The reason it's not recommended is that giving a host already implies caller mode, while leaving the host out will default to 0.0.0.0, which is nonsensical for caller mode.
Both styles are available through the push panel; listener output is available through the protocol panel as well.
SRT LISTENER OUTPUT
There are two methods to set this up: a "sort of" temporary one through the push panel and a more permanent one through the protocol panel.
Push panel style
Setting up SRT LISTENER output through the push panel is done through the Push panel and setting up a push stream with target:
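Following the rule that leaving out the host implies listener mode, such a target could look like this (the port is a placeholder):

```
srt://:8889
```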
This will set up MistServer to push out the stream and accept incoming connections. Anyone who connects will jump to the current live point of the stream, whether it's live or VoD; there is no starting from the beginning here. The push will stop once it reaches the end of the stream or the source input disappears.
Protocol Panel Style
Setting up SRT Listener output through the protocol panel is done by selecting TS over SRT and setting up the following:
- Set up the source input by filling in the stream name.
- Choose a port (optional: Host too)
This will start the listener and make the stream available to viewers. VoD files will always start at the beginning of the file, while live streams will start at the most recent live point.
SRT CALLER OUTPUT
This is only done through the push panel, set up a new push and use the following target:
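Since setting a host implies caller mode, such a target could look like this (host and port are placeholders):

```
srt://example.com:8889
```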
Again, the alternative mode is not recommended, as it does not make much sense here. Setting an SRT output in caller mode will connect to an SRT listener and start pushing once a connection is made. VoD will always start at the beginning and live will start at the most recent live point. If no connection is made within ~10 seconds it will close down, and it will only start up again if it is an automatic push with retry enabled.
All SRT over a single Port
SRT can also be set up to work through a single port using the ?streamid parameter. Within the MistServer protocol panel you can set up SRT (default port 8889) to accept connections coming in, going out, or both.
If set to incoming connections, the port can only be used for SRT connections going into the server. If set to outgoing, the port will only be available for SRT connections going out of the server. If set to both, SRT will listen first, and if nothing happens within 3 seconds it will start trying to make an outgoing connection. Do note that we have found this dual-mode functionality to be buggy on some Linux distributions (e.g. Ubuntu 18.04) and over highly unstable connections.
Once set up you can use SRT in a similar fashion to RTMP or RTSP. You can pull any available stream within MistServer using SRT, and push towards any stream that's set up to receive incoming pushes. It makes the overall usage of SRT a lot easier, as you do not need to set up a port per stream.
Pushing towards SRT using a single port
You can push towards a MistServer's incoming SRT connection port using:
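Such a push target has the following shape (host and stream name are placeholders; 8889 is the default port mentioned above):

```
srt://mistserver.example.com:8889?streamid=streamname
```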
Do note that the stream has to be set up to accept incoming pushes.
Pulling SRT from MistServer using a single port
You can pull from a MistServer using its outgoing SRT connection port:
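The pull URL has the same shape as the push target (host and stream name are placeholders):

```
srt://mistserver.example.com:8889?streamid=streamname
```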
Known issue in some of the Linux OSs (like Ubuntu 18.04)
The SRT library we use for the native implementation has an issue on some Linux distros. Our default usage for SRT is to accept both incoming and outgoing connections on the same port. Some Linux distros have a bug in that logic and can get stuck waiting for data when they should be pushing out, if you're trying to pull an SRT stream from the server. If you notice this, you can avoid the issue by setting one port for outgoing SRT connections and another port for incoming SRT connections. This setup will also save you roughly 3 seconds of latency. The only difference is that the port changes depending on whether the stream data comes into the server or leaves it.
Recommendations and best practices
The most flexible method of working with SRT is using SRT over a single port. However, truly using a single port brings some downsides in terms of latency and stability. Therefore we recommend setting up two ports, one for input and one for output, and using these together with ?streamid parameters.
Getting SRT to work better
There are several parameters (options) you can add to any SRT URL to better configure the SRT connection; anything using the SRT library should be able to handle these parameters. They are often overlooked: most first-time users tend to just fill in the URLs, see that it doesn't work the way they'd like, and stop trying there and then. Understand that the default settings of an SRT connection cannot be optimized for your particular link from the get-go. The defaults will work under good network conditions, but are not meant to be used as-is on unreliable connections.
A full list of options you can use can be found in the SRT documentation.
Using these options is as simple as setting a parameter within the URL: make the option name lowercase and strip the SRTO_ prefix. For example, SRTO_LATENCY becomes ?latency= or &latency=, depending on whether it's the first or a following parameter.
We highly recommend starting out with the parameters below, as these make all the difference in the world for stream quality, especially on the bad connections where SRT should be used.
The most important parameter to set for unstable connections is latency. Simply put, it is the time SRT will wait for other packets to come in before passing data on. As you might understand, if the connection is bad you will want to give this process some time; it would be unrealistic to assume everything got sent over correctly at once, as you wouldn't be using SRT otherwise! Haivision themselves recommend setting this as:
RTT_Multiplier * RTT
RTT = Round Trip Time: basically the time it takes for the servers to reach each other back and forth. If you're using ping or iperf, remember you will need to double the ms value you get.
RTT_Multiplier = a multiplier that indicates how often a packet can be resent before SRT gives up on it. Values range between 3 and 20, where 3 means a perfect connection and 20 means 100% packet loss.
Haivision recommends using their table depending on your network constraints. However, if you are anything like me and do not want to spend time on such calculations, I would recommend using the following and going up a step whenever you see it is still not working properly:
1: 4 x RTT
2: 8 x RTT
3: 12 x RTT
4: 16 x RTT
5: 20 x RTT
While it is not the best setting, it does get the job done. You might lose out on latency, but our priority with SRT is ensuring stream stability, not latency.
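As a hypothetical worked example (the measured round-trip time here is made up), using step 2 of the ladder above on an 80 ms round trip:

```python
# Step 2 of the ladder above: latency = 8 x RTT.
rtt_ms = 80                  # hypothetical measured round-trip time in ms
latency_ms = 8 * rtt_ms      # 640 ms
print(f"?latency={latency_ms}")  # prints ?latency=640
```

The resulting ?latency=640 (SRT latency is expressed in milliseconds) would then be appended to the SRT URL.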
The packetfilter option enables forward error correction, which in turn can help stream stability. A very good explanation of how to tackle this is available here. Important to note: it is recommended that one side sets no FEC settings while the other side sets all of them. To do this best, have MistServer set nothing and have any incoming push towards MistServer set the forward error correction filter.
Our personal default setting is:
We start with this and have not had to switch it yet when combined with a good latency setting. Optimizing this properly is obviously the best choice, but using "something" is already better than nothing in this case.
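For reference, an SRT FEC packet filter parameter takes the following shape (the cols/rows values here are illustrative examples):

```
?packetfilter=fec,cols:10,rows:10
```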
Combining multiple parameters
To avoid confusion: these parameters work like any other URL parameters. The first one always starts with a ? while every following one starts with an &.
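Putting it all together, a caller-mode URL with multiple options could look like this (host, port and values are placeholders):

```
srt://example.com:8889?streamid=streamname&latency=640&packetfilter=fec,cols:10,rows:10
```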
Hopefully this has given you enough to get started with SRT on your own. Of course, if there are any questions left or you run into any issues, feel free to contact us and we'll happily help you!
[Release] Release notes summary 3.0
After years of work on what was supposed to be an "easy" 3-month project that went slightly out of scope, we're proud to announce the release of MistServer 3.0!
So, why 3.0? Basically, we have redone the entire code base of MistServer, making it impossible to do a rolling update from 2.X versions. This means upgrading will require dropping all current connections, as it needs to happen while MistServer is turned off. Your configuration, your usage of MistServer, and integrations with other applications through triggers and the API will all stay the same. And of course, once rebooted, MistServer should behave just as you were used to, but with much lower latency.
All in all you should see improved performance and newly added features in the same old trusted interface. We have plans to upgrade the interface to something more modern as well, but we did not want to delay the 3.0 release any longer. Perhaps more importantly, we have made the decision to make all of the MistServer project fully open source without any restrictions! If you want to read more on why we did this, you can read more about it on our blog.
Do note: the 3.0 release is only available for Linux-based systems at the moment. We will follow up with a 3.1 release soon that will also update the Windows and MacOS versions. Do expect our next few updates to come faster!
- Everything, including previously Pro-only features, is now Public Domain software.
- New protocols:
- WebRTC (input and output)
- WS/MP4 (output)
- SRT (native support; input and output)
- CMAF push (output)
- LLHLS (output)
- New live stream processing system feature, with:
- Livepeer process
- ffmpeg integration
- generic MKV-based process for easy integration with practically any other software
- Core buffer rewrite
- Massive latency reduction (previously 1-2 sec, now 2-3 frames end-to-end)
- A very long list of bug fixes and other improvements. See the full changelog for details!
[Blog] Migration instructions between 2.X and 3.X
With the release of 3.0, we are releasing a version that has gone through extensive rewrites of the internal buffering system.
Many internal workings have been changed and improved. As such, there was no way of keeping compatibility with running previous versions, making a rolling update without dropping connections infeasible.
In order to update MistServer to 3.0 properly, step one is to fully turn off your current version. After that, just run the installer of MistServer 3.0 or replace the binaries.
Process when running MistServer through binaries
- Shut down MistController
- Replace the MistServer binaries with the 3.0 binaries
- Start MistController
Process when running MistServer through the install script
Shut down MistController:
- systemd: systemctl stop mistserver
- other init systems: service mistserver stop
Start MistServer install script:
curl -o - https://releases.mistserver.org/is/mistserver_64Vlatest.tar.gz 2>/dev/null | sh
Process for completely uninstalling MistServer and installing MistServer 3.0
Run the uninstall script:
curl -o - https://releases.mistserver.org/uninstallscript.sh 2>/dev/null | sh
Then run the MistServer install script:
curl -o - https://releases.mistserver.org/is/mistserver_64Vlatest.tar.gz 2>/dev/null | sh
Enabling new features within MistServer
You can enable the new features within MistServer by going to the protocol panel and enabling them. Some other protocols have gone out of date, like OGG (to be re-added later on), DASH (replaced by CMAF) and HSS (replaced by CMAF, as Microsoft no longer supports HSS and has moved to CMAF themselves as well). The outdated protocols can be removed/deleted without worry. The new protocols can be added manually, or automatically by pressing the "enable default protocols" button.
Rolling back from 3.0 to 2.x
Downgrading MistServer from 3.0 to 2.X runs into the same issue: it is unable to keep connections active, which means you will have to repeat the process listed above with the final binaries/install link being the 2.X version. If you deleted the old 2.X protocols during the 3.X upgrade, you will have to re-add them using the same "enable default protocols" method. It is safe to have both sets in your configuration simultaneously if you switch between versions a lot, or need a single config file that works on both.
[Blog] Why is all of MistServer open source?
Hey there! This is Jaron, the lead developer behind MistServer and one of its founding members. Today is a very special day: we released MistServer 3.0 under a new license (Public Domain instead of aGPLv3) and decided to include all the features previously exclusive to the "Pro" edition of MistServer as well. That means there is now only one remaining version of MistServer: the free and open source version.
You may be wondering why we decided to do this, so I figured I'd write a blog post about it.
First some history! The MistServer project first started almost fifteen years ago, with a gaming-focused live streaming project that was intended to be a rival to the service that would later become known as Twitch. At the time, we relied on third-party technology to make this happen, and internet streaming was still in its infancy in general. Needless to say, that project failed pretty badly.
During a post-mortem meeting for that failed project, the live streaming tech we relied on came up as one of the factors that caused the project to fail. In particular how this software acted like a black box, and made it very tricky to integrate something innovative with it. The question came up if we could have done better, ourselves. We figured we probably could, and decided to try it. After all, how hard could it be to write live streaming server software, right..?
What we thought would be a short and fun project, quickly turned into something much bigger. The further we got, the more we discovered that video tech was - back then especially - a very closed off industry that is hard to get into. As we worked on the software, we came up with the idea that we wanted to change this. Open it up to newcomers, like we ourselves had tried, and make it possible for anyone with a good idea to make a successful online media service. There were several popular free and open source web server software packages, like Apache and Nginx - but all the media server software was closed (and usually quite expensive, as well). We wanted to do the same thing for media server software: create something open, free, and easy to use for developers of all backgrounds to enable creativity to flourish.
However, we also had people working on this software full-time that most definitely needed to be paid for their efforts. So while the first version of MistServer was already partially open source, we made a few hard decisions: we kept the most valuable features closed source, and the parts that were open were licensed under the aGPLv3. That license is an "infectious" open source license: it requires anyone communicating over a network with the software to get the full source code of the whole project. That would make it almost impossible to use in a commercial environment - both because of the missing features as well as the aggressive license.
That allowed us to then sell custom licensing and support contracts, while staying true to the ideas behind open source. Our plan was to eventually - as we had built up enough customer base and could afford to make this decision - release the whole software package as open source and solely sell support and similar services contracts. As we were funded by income from license sales, our growth was fairly restricted and thus slow and organic. We built up a good reputation, but were nowhere near being able to proceed with the plan we made at the start.
Over the years, we slowly did release some of the closed parts as open, but we had to be careful not to "give away" too much. To non-commercial users, we made available a very cheap version of the license without support. Our license and support contracts over time evolved to be mostly about support, and licensing itself more of an excuse to start discussing support terms. From my own interactions with our customers it has become clear that they stay with us because of the support we offer, and consider that the most valuable part of their contracts with us. However, the constrained growth did mean we were not able to fully commit to a business model that did not involve selling licenses.
Then, last October, Livepeer came along. They have a similar goal and mindset as the MistServer team, which meant they not only understood our long-term plan, but believed that, with the increased funding flow they brought us, it could now finally be executed!
So, it may seem like a sudden change of course for us to release the full software as open source today, but nothing could be further from the truth. It's something we've believed in and have been wanting to do right from the start of the project. Words are lacking to describe how it feels to finally be able to come full circle and complete a plan that has been so long in the making. It's an extremely exciting moment for us, and I speak for the whole team when I say we're looking forward to continuing to improve and share MistServer with the world.