[Blog] MistServer and Secure Reliable Transport (SRT)
Hello everyone, this article is about using Haivision SRT together with MistServer.
What is Secure Reliable Transport (SRT)?
Secure Reliable Transport, or SRT for short, is a method to send stream data over unreliable network connections. Do note that it is meant for server-to-server traffic; there are no SRT players. The main advantage of SRT is that it allows you to push a playable stream over networks that otherwise would not work properly. Keep in mind, however, that on a perfect connection it would just add unnecessary latency, so it is mostly something to use when you have to rely on public internet connections.
How to use SRT in MistServer
SRT is implemented to behave like srt-live-transmit, so using SRT under MistServer should feel familiar if you’re familiar with the usage of srt-live-transmit. Filling in a host on one side implies caller mode while leaving the host out implies listener mode.
The main difference from srt-live-transmit is that you do not set up both the input and the output side: you only configure one side of the connection, and the other side is implied by the usage. You can also override any default setting:
- Not setting a host implies listener mode for the input/output
- Setting a host implies caller mode for the input/output
- You can always override the implied mode by adding ?mode=caller or ?mode=listener as a parameter
- Not setting a host will default the bind address to 0.0.0.0
Both caller and listener inputs are configured by creating a new stream through the stream panel.
SRT LISTENER INPUT
This input is set up by creating/editing a new stream and setting the following source:
By leaving out the host you imply listener mode, instructing MistServer to listen on all available addresses on the given port for incoming SRT stream data. Another method would be:
This will force MistServer to open the given address/port as listener mode (host is optional, if left out 0.0.0.0 will be used).
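As an illustration (the port 8888 is a placeholder, not a MistServer default), a listener input source could look like either of these:

```
srt://:8888
srt://0.0.0.0:8888?mode=listener
```

Both instruct MistServer to listen on all interfaces on port 8888 for incoming SRT data.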
If no SRT data comes in, it behaves like other MistServer inputs: if set to "always on" it will keep retrying indefinitely; if set to "default" it will try for about 20 seconds, then retry once a new viewer tries to open the stream.
SRT CALLER INPUT
This input is set up by creating/editing a new stream and setting the following source:
By setting the host you will put the SRT input in caller mode, thus instructing it to connect to the given address/port and look for SRT data to receive.
Another method would be:
Though not recommended, you could also use this to set up caller mode. We advise against it because giving a host already implies caller mode, and leaving the host out would default it to 0.0.0.0, which is nonsensical for caller mode.
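For example (example.com and 8888 are placeholders), a caller input source could look like:

```
srt://example.com:8888
srt://example.com:8888?mode=caller
```

The first form is preferred, since setting a host already implies caller mode.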
Both styles are available through the push panel; listener output is available through the protocol panel as well.
SRT LISTENER OUTPUT
There are two methods to set this up: a somewhat temporary one through the push panel and a more permanent one through the protocol panel.
Push panel style
Setting up SRT listener output through the push panel is done by creating a push with the following target:
This will set up MistServer to push out the stream and accept incoming connections. Those that connect will jump to the current live point of the stream, whether it’s Live or VoD. There is no starting from the beginning here. The push will stop once it reaches the stream end or the source input disappears.
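A push target for this style could look like (the port is a placeholder):

```
srt://:8888
```

Leaving out the host makes the push listen on the given port for incoming connections to send the stream to.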
Protocol Panel Style
Setting up SRT Listener output through the protocol panel is done by selecting TS over SRT and setting up the following:
- Set up the source input by filling in the stream name.
- Choose a port (optional: Host too)
This will start the listener and make the stream available for viewers. VoD files will always start at the beginning of the file, while live streams will start at the most recent live point.
SRT CALLER OUTPUT
This is only done through the push panel, set up a new push and use the following target:
Again, the alternative mode is not recommended as it does not make much sense. Setting an SRT output to caller mode makes it connect to an SRT listener and start pushing once a connection is made. VoD will always start at the beginning and live will start at the most recent live point. If no connection is made within ~10 seconds it will close down, and it will only start up again if it is an automatic push with retry enabled.
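A caller output target could look like this (host and port are placeholders for the listener you are pushing towards):

```
srt://example.com:8888
```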
All SRT over a single Port
SRT can also be set up to work through a single port using the ?streamid parameter. Within the MistServer protocol panel you can set up SRT (default port 8889) to accept connections coming in, going out, or both.
If set to incoming connections, this port can only be used for SRT connections going into the server. If set to outgoing, the port will only be available for SRT connections going out of the server. If set to both, SRT will listen first; if nothing comes in within 3 seconds of contact being made, it will start trying to send out a connection instead. Do note that we have found this functionality to be buggy on some Linux distributions (e.g. Ubuntu 18.04) and over highly unstable connections.
Once set up you can use SRT in a similar fashion to RTMP or RTSP. You can pull any available stream within MistServer using SRT and push towards any stream that is set up to receive incoming pushes. This makes the overall usage of SRT a lot easier, as you do not need to set up a port per stream.
Pushing towards SRT using a single port
You can push towards a MistServer's incoming SRT connection port using:
Do note that the stream has to be set up to accept incoming pushes.
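As an illustration (hostname, port 8889 and the stream name "live" are placeholders; use the port configured in your protocol panel), the push target takes the form:

```
srt://example.com:8889?streamid=live
```

For instance, an ffmpeg build with libsrt support can push an MPEG-TS muxed stream towards such a URL with its -f mpegts output format.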
Pulling SRT from MistServer using a single port
You can pull from a MistServer using its outgoing SRT connection port:
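The pull URL takes the same form (hostname, port and stream name are again placeholders):

```
srt://example.com:8889?streamid=live
```

Any SRT-capable client should be able to open this; for example, an ffplay build with libsrt support can open such a URL directly.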
Known issue on some Linux distributions (like Ubuntu 18.04)
The SRT library we use for the native implementation has one issue on some Linux distros. Our default usage for SRT is to accept both incoming and outgoing connections. Some Linux distros have a bug in that logic and can get stuck waiting for data when they should be pushing out, while you're trying to pull an SRT stream from the server. If you notice this, you can avoid the issue by setting one port for outgoing SRT connections and another port for incoming SRT connections. This setup will also save you ~3 seconds of latency when used. The only difference is that the port changes depending on whether the stream data comes into the server or leaves it.
Recommendations and best practices
The most flexible method of working with SRT is using SRT over a single port. However, truly using a single port brings some downsides in terms of latency and stability. We therefore recommend setting up two ports, one for input and one for output, and using these together with the ?streamid parameter.
Getting SRT to work better
There are several parameters (options) you can give to any SRT URL to better set up the SRT connection; anything using the SRT library should be able to handle these parameters. They are often overlooked: most first-time users simply fill in the URLs, see that it does not work the way they would like, and stop trying there and then. Understand that the default settings of any SRT connection cannot be optimized for your connection from the get-go. The defaults work under good network conditions, but are not meant to be used as-is on unreliable connections.
A full list of options you can use can be found in the SRT documentation.
Using these options is as simple as setting a parameter within the URL: make the option name lowercase and strip the SRTO_ part. For example, SRTO_STREAMID becomes ?streamid= or &streamid=, depending on whether it is the first or a following parameter.
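For instance, the SRTO_LATENCY option from the SRT API documentation would be set like this (host, port and the value 1000 ms are placeholder examples):

```
srt://example.com:8888?latency=1000
```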
We highly recommend starting out with the parameters below as these make all the difference in the world for stream quality especially with bad connections where SRT should be used.
Latency (SRTO_LATENCY) is what we consider the most important parameter to set for unstable connections. Simply put, it is the time SRT will wait for missing packets to come in before passing the data on. As you might understand, if the connection is bad you will want to give the process some time; it would be unrealistic to assume everything got sent over correctly at once, as you wouldn't be using SRT otherwise! Haivision themselves recommend setting this as:
RTT_Multiplier * RTT
RTT = Round Trip Time, basically the time it takes for the servers to reach each other back and forth. If you're using ping or iperf, remember you will need to double the ms you get.
RTT_Multiplier = a multiplier that indicates how many times a packet can be resent before SRT gives up on it. The values are between 3 and 20, where 3 means a perfect connection and 20 means 100% packet loss.
Now what Haivision recommends is using their table depending on your network constraints; however, if you are anything like me and do not want to spend time on such calculations, I would recommend using the following and going up a step whenever you see it is still not working properly:
1: 4 x RTT
2: 8 x RTT
3: 12 x RTT
4: 16 x RTT
5: 20 x RTT
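As a sketch, the rule of thumb above can be put into a small helper (the function name and step numbering are our own, not part of SRT):

```python
# Suggest an SRT latency value using the rule of thumb latency = RTT_Multiplier * RTT.
# The steps mirror the escalation ladder above (4x, 8x, 12x, 16x, 20x RTT);
# move up one step while the stream still stutters.

STEPS = [4, 8, 12, 16, 20]

def srt_latency_ms(rtt_ms: int, step: int = 1) -> int:
    """Return a suggested SRT latency in ms for a measured round-trip time (ms)."""
    if not 1 <= step <= len(STEPS):
        raise ValueError("step must be between 1 and 5")
    return STEPS[step - 1] * rtt_ms

# A measured RTT of 80 ms at step 1 suggests ?latency=320
print(srt_latency_ms(80))          # 320
print(srt_latency_ms(80, step=5))  # 1600
```

The resulting value goes straight into the URL, e.g. ?latency=320.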
While it is not the best setting, it does get the job done. You might lose out on latency, but our priority with SRT is ensuring stream stability, not latency.
The packetfilter option enables forward error correction, which in turn can help stream stability. A very good explanation on how to tackle this is available here. Important to note: it is recommended that one side sets no filter settings and the other side sets all of them. To do this best, have MistServer set nothing and have any incoming push towards MistServer set the forward error correction filter.
Our personal default setting is:
We start with this and have not had to change it yet when combined with a good latency setting. Properly optimizing this is obviously the best choice, but using "something" is already better than nothing in this case.
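As a sketch of what such a filter parameter can look like on the pushing side (the cols/rows values here are illustrative, taken from the SRT packet filter documentation's examples, and not necessarily our defaults):

```
srt://example.com:8888?packetfilter=fec,cols:10,rows:10
```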
Combining multiple parameters
To avoid confusion: these parameters work like any other URL parameters. The first one always starts with a ? while every following one starts with an &.
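For example, combining a stream id with a latency and a packet filter setting (all values are placeholders) looks like:

```
srt://example.com:8889?streamid=live&latency=1000&packetfilter=fec,cols:10,rows:10
```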
Hopefully this has given you enough to get started with SRT on your own. Of course, if there are any questions left or you run into any issues, feel free to contact us and we'll happily help you!
[Release] Release notes summary 3.0
After years of work on what would’ve been an “easy” 3 month project that went slightly out of scope we’re proud to announce the release of MistServer 3.0!
So, why 3.0? Basically we have redone the entire code base of MistServer making it impossible to do a rolling update from 2.X versions. This means upgrading will require dropping all current connections, as it needs to happen while MistServer is turned off. Your configuration, usage of MistServer and integration with other applications through triggers and the API will all stay the same. And of course once rebooted MistServer should behave just as you were used to, but with much lower latency.
All in all you should see improved performance and newly added features in the same old trusted interface. We have plans to upgrade the interface to something more modern as well, but we did not want to delay the 3.0 release any longer. Perhaps more importantly, we have made the decision to make all of the MistServer project fully open source without any restrictions! If you want to know why we did this, you can read more about it on our blog.
Do note: the 3.0 release is only available for Linux-based systems at the moment. We will follow up with a 3.1 release soon that will also update the Windows and MacOS versions. Do expect our next few updates to come faster!
- Everything, including previously Pro-only features, is now Public Domain software.
- New protocols:
- WebRTC (input and output)
- WS/MP4 (output)
- SRT (native support; input and output)
- CMAF push (output)
- LLHLS (output)
- New live stream processing system, with:
- Livepeer process
- ffmpeg integration
- generic MKV-based process for easy integration with practically any other software
- Core buffer rewrite
- Massive latency reduction (previously 1-2 sec, now 2-3 frames end-to-end)
- A very long list of bug fixes and other improvements. See the full changelog for details!
[Blog] Migration instructions between 2.X and 3.X
With the release of 3.0, we are releasing a version that has gone through extensive rewrites of the internal buffering system.
Many internal workings have been changed and improved. As such, there was no way of keeping compatibility between running previous versions and the 3.0 release, making a rolling update without dropping connections not feasible.
In order to update MistServer to 3.0 properly, step one is to fully turn off your current version. After that, just run the installer of MistServer 3.0 or replace the binaries.
Process when running MistServer through binaries
- Shut down MistController
- Replace the MistServer binaries with the 3.0 binaries
- Start MistController
Process when running MistServer through the install script
Shut down MistController:
Systemd: systemctl stop mistserver
SysVinit: service mistserver stop
Start MistServer install script:
curl -o - https://releases.mistserver.org/is/mistserver_64Vlatest.tar.gz 2>/dev/null | sh
Process for completely uninstalling MistServer and installing MistServer 3.0
Run the uninstall script:
curl -o - https://releases.mistserver.org/uninstallscript.sh 2>/dev/null | sh
Then run the install script:
curl -o - https://releases.mistserver.org/is/mistserver_64Vlatest.tar.gz 2>/dev/null | sh
Enabling new features within MistServer
You can enable the new features within MistServer by going to the protocol panel and enabling them. Some other protocols will have gone out of date, like OGG (to be re-added later), DASH (replaced by CMAF) and HSS (replaced by CMAF, as Microsoft no longer supports HSS and has moved to CMAF themselves). The out-of-date protocols can be removed/deleted without worry. The new protocols can be added manually, or automatically by pressing the "enable default protocols" button.
Rolling back from 3.0 to 2.x
Downgrading MistServer from 3.0 to 2.X runs into the same issue: it is unable to keep connections active, which means you will have to repeat the process listed above with the end binaries/install link being the 2.X version. If you deleted the old 2.X protocols during the 3.X upgrade, you will have to re-add them using the same "enable default protocols" method. It is safe to have both sets in your configuration simultaneously if you switch between versions a lot or need a single config file that works on both.
[Blog] Why is all of MistServer open source?
Hey there! This is Jaron, the lead developer behind MistServer and one of its founding members. Today is a very special day: we released MistServer 3.0 under a new license (Public Domain instead of aGPLv3) and decided to include all the features previously exclusive to the "Pro" edition of MistServer as well. That means there is now only one remaining version of MistServer: the free and open source version.
You may be wondering why we decided to do this, so I figured I'd write a blog post about it.
First some history! The MistServer project first started almost fifteen years ago, with a gaming-focused live streaming project that was intended to be a rival to the service that would later become known as Twitch. At the time, we relied on third-party technology to make this happen, and internet streaming was still in its infancy in general. Needless to say, that project failed pretty badly.
During a post-mortem meeting for that failed project, the live streaming tech we relied on came up as one of the factors that caused the project to fail. In particular how this software acted like a black box, and made it very tricky to integrate something innovative with it. The question came up if we could have done better, ourselves. We figured we probably could, and decided to try it. After all, how hard could it be to write live streaming server software, right..?
What we thought would be a short and fun project, quickly turned into something much bigger. The further we got, the more we discovered that video tech was - back then especially - a very closed off industry that is hard to get into. As we worked on the software, we came up with the idea that we wanted to change this. Open it up to newcomers, like we ourselves had tried, and make it possible for anyone with a good idea to make a successful online media service. There were several popular free and open source web server software packages, like Apache and Nginx - but all the media server software was closed (and usually quite expensive, as well). We wanted to do the same thing for media server software: create something open, free, and easy to use for developers of all backgrounds to enable creativity to flourish.
However, we also had people working on this software full-time that most definitely needed to be paid for their efforts. So while the first version of MistServer was already partially open source, we made a few hard decisions: we kept the most valuable features closed source, and the parts that were open were licensed under the aGPLv3. That license is an "infectious" open source license: it requires anyone communicating over a network with the software to get the full source code of the whole project. That would make it almost impossible to use in a commercial environment - both because of the missing features as well as the aggressive license.
That allowed us to then sell custom licensing and support contracts, while staying true to the ideas behind open source. Our plan was to eventually - as we had built up enough customer base and could afford to make this decision - release the whole software package as open source and solely sell support and similar services contracts. As we were funded by income from license sales, our growth was fairly restricted and thus slow and organic. We built up a good reputation, but were nowhere near being able to proceed with the plan we made at the start.
Over the years, we slowly did release some of the closed parts as open, but we had to be careful not to "give away" too much. To non-commercial users, we made available a very cheap version of the license without support. Our license and support contracts over time evolved to be mostly about support, and licensing itself more of an excuse to start discussing support terms. From my own interactions with our customers it has become clear that they stay with us because of the support we offer, and consider that the most valuable part of their contracts with us. However, the constrained growth did mean we were not able to fully commit to a business model that did not involve selling licenses.
Until, last October, Livepeer came along. They have a similar goal and mindset as the MistServer team did and does, which meant they not only understood our long-term plan, but believed that with the increased funding flow they brought to us, it could now finally be executed!
So, it may seem like a sudden change of course for us to release the full software as open source today, but nothing could be further from the truth. It's something we've believed in and have been wanting to do right from the start of the project. Words are lacking to describe how it feels to finally be able to come full circle and complete a plan that has been so long in the making. It's an extremely exciting moment for us, and I speak for the whole team when I say we're looking forward to continuing to improve and share MistServer with the world.
[News] MistServer has been acquired by Livepeer
DDVTech and the MistServer product have been acquired by Livepeer, Inc. If you want to read their own blog post about the acquisition, you can do so here.
To start things off, a quote from our CTO Jaron:
There could not be a more fitting team to work with, as Livepeer and DDVTech perfectly align in terms of philosophy and long-term goals. I’m looking forward to further growing both the MistServer product and the Livepeer brand in general, and bringing more exciting technologies to the open video infrastructure ecosystem. The resources and experience that the combined teams bring to the table open up a staggering amount of possibilities. This is a very exciting time for all of us.
So what does this mean for MistServer users and the future of MistServer?
There will be no changes to the product and service you are familiar with. The entire development team of MistServer is still intact and still working full-time on the MistServer project. You can keep talking to the points of contact you've been talking to so far. Commercial-related operations will gradually be transferred to the Livepeer team, so that the MistServer team can keep their focus on purely development and support related tasks.
The extra resources Livepeer is making available to us will allow us to accelerate development and provide you with an even better product and service going forward.
If you're worried about anything or have questions, we'd be happy to talk to you. You can reach out to the new commercial team at firstname.lastname@example.org, or reach the existing MistServer team through the usual methods (our contact form or directly to your existing point of contact).