[News] MistServer and Intinor at NAB2018
We're happy to return for our second NAB show this coming April. The MistServer team will be available from 9-12 April at booth SU13808CM, where we'll happily update you on our latest developments. If you're visiting NAB, please feel free to drop by at any time!
Joining us will be Intinor, and together we'll be showing their Bifrost Reliable Transport (BRT), which optimizes live video over the public internet by adding bonding to reliable transport. You can read more about it and find the technical paper here.
[Blog] What hardware do I need to run MistServer?
Hey everyone! A very common question we get is about the hardware requirements for MistServer. Like any good piece of software there is no "real" hardware requirement to run MistServer, but there definitely is a hardware requirement for what you want to achieve streaming-wise. I'll go over the main hardware categories and leave you with a few simple formulas to decide on your hardware specs.
How do I decide on the hardware?
We tend to divide the hardware into 4 necessary categories: CPU, RAM, bandwidth and storage. Each category is important, but depending on your streaming needs one might be more important than another.
The CPU is obviously important, as it handles all the calculations/requests in your server. Luckily MistServer itself is not that demanding on your system. If you are running other applications aside from MistServer, they will probably matter more for your processor choice than MistServer. As MistServer is heavily multi-process, it does benefit from processors that can handle more threads or have more cores.
To give you something tangible: on the cpubenchmark mega list, every 3.4 points of CPU mark equals one viewer. For example the Intel Xeon E5-2679 v4 @ 2.50GHz comes with a CPU mark of 25236 and will be able to handle roughly 7400 viewers at the same time (25236 / 3.4 ≈ 7422).
Memory gets heavier use on media servers, as it is often used for temporary files such as the stream data itself. This means it is used for both incoming and outgoing streams, and the required memory can rise quite rapidly. MistServer tries to get a handle on this by sharing the stream data between inputs, outputs and even different protocols when possible. Still, it is safest to calculate the necessary memory for the absolute worst case scenario, where memory cannot be shared at all!
To calculate the memory use in the worst case scenario when using MistServer, you will require memory per viewer, and the amount depends on the quality of your stream. MistServer needs roughly 12.5MB for every megabit of incoming stream bandwidth under the default MistServer settings. So obviously, the more streams or stream tracks, the more memory you need. On top of this comes a constant 2MB of memory per active connection (in either direction).
So if I assume 50 incoming streams of 2mbps and 600 viewers I will need: 12.5×2×50 = 1250MB plus 2×650 = 1300MB, for a total of 2550MB. So roughly 2.5GB; I would recommend a safety margin of at least 10%, so going with at least 2.8GB would be wise.
Bandwidth is often the main bottleneck when it comes to streaming media, especially when higher qualities like 4K are used. Bandwidth is simply the amount of traffic your server can handle before the network connection is saturated. Once your network gets saturated, users will have to wait for their data, which usually leads to a very bad viewer experience when it comes to media. So it is definitely one of the main things to avoid, and thus necessary to calculate what you can handle.
Luckily this is quite easy to calculate: take the stream quality, multiply it by every connection (both incoming and outgoing) for every stream you have or plan to have, and add it all together. Do note that even if a stream quality or the stream itself is not viewed, the incoming connection will still use up network bandwidth if it is pushed from an outside source, so do not neglect those streams.
For example, if I have 5 streams, one of 1mbps, two of 2mbps and two of 5mbps, with 50 viewers on the 1mbps stream, 300 viewers on the 2mbps streams and 150 viewers on the 5mbps streams, I will need to be able to handle: 1mbps×(50+1) + 2mbps×(300+2) + 5mbps×(150+2) = 1415mbps. As you can see, especially higher quality streams can make this rise rather fast, which is usually why a CDN or load balancer is used.
Storage is usually more easily understood: all you need is enough space to fit whatever streams you want to provide on demand or record. Especially given the price of storage compared to the other hardware requirements, people tend to go a bit too far with their storage. It cannot hurt to have more, though.
Storage is easily calculated, all you need to do is multiply the stream quality by the duration for every stream you have. The only thing you will want to pay attention to is that stream qualities are measured in bits while storage is measured in bytes. There are 8 bits in a byte, so the storage necessary is 8 times less than the bandwidth×duration.
Following the bandwidth example, if I have the same 5 streams, one of 1mbps, two of 2mbps and two of 5mbps, and want to record them all for 20 minutes, I would need: (1×20×60 + 2×2×20×60 + 5×2×20×60) / 8 = 2250MB. So a little over 2GB.
Calculate your own hardware requirements
Below are a few formulas you can fill in to calculate your own bandwidth requirements, or the capacity of your current system. Do note they are meant for calculating your hardware requirements with MistServer, so do not expect the same requirements with other media servers.
Formulas to manually calculate
If you'd rather calculate by hand that's possible too. Just use the following formulas:
- CPU: 3.4×Connections (viewers + incoming streams) = necessary cpubenchmark score
- Memory: 12.5×Stream_mbps×Streams_total + 2×Connections = RAM in MByte
- Bandwidth: Average_stream_quality_mbps×(Input_streams + Viewers) = Bandwidth in Mbit
- Storage: Total_duration_of_recordings×Average_stream_quality_mbps / 8 = Storage in MByte
As a reminder, the steps between kilo, mega and giga are 1024, not 1000, when we're measuring bits or bytes. So make sure you use 1024 when converting between those units or you will have a calculation error.
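The formulas above can be put into a small script. Here is a minimal sketch in Python; the constants come from this post, while the function names are our own:

```python
# Rough MistServer capacity calculator based on the formulas above.
# Constants: 3.4 CPU-mark points per connection, 12.5 MB RAM per
# megabit of incoming stream, 2 MB RAM per active connection.

def cpu_mark_needed(connections):
    """CPU benchmark score needed for this many simultaneous connections."""
    return 3.4 * connections

def ram_mb_needed(stream_mbps, stream_count, connections):
    """Worst-case RAM in MB: per-stream buffers plus per-connection overhead."""
    return 12.5 * stream_mbps * stream_count + 2 * connections

def bandwidth_mbps_needed(stream_mbps, input_streams, viewers):
    """Network bandwidth in Mbit/s: inputs plus viewers at this quality."""
    return stream_mbps * (input_streams + viewers)

def storage_mb_needed(duration_s, stream_mbps):
    """Storage in MB: totals are in Mbit, and there are 8 bits in a byte."""
    return duration_s * stream_mbps / 8

if __name__ == "__main__":
    # Worked example from this post: 50 incoming 2mbps streams, 600 viewers,
    # giving 650 connections in total.
    print(ram_mb_needed(2, 50, 650))  # 2550.0 MB, roughly 2.5GB
```

Add your own 10% safety margin on top of the memory result, as recommended above.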
Well, that was it for this blog post. I hope it helped you understand what kind of hardware to look for when using MistServer. Our next blog post, by Erik, will cover stream encryption.
[News] Introducing subscriptions
Hello everyone! We're happy to announce our shop now supports subscriptions.
When you select the subscription payment plan, you'll be charged for the first month. Then, when the month is up, a new license will automatically be created and paid for.
This means you will no longer have to worry about your licenses expiring.
You can cancel your subscription at any time. Your license will remain valid until its end date, but it will not be renewed and you will not be charged again.
If you currently have an active license and would like to subscribe from now on, you can click the Renew button from your Invoices overview and select the subscription payment plan. That will configure the cart to be filled with the settings of your previous order, starting at its end date.
If you have any questions or encounter any problems, please feel free to contact us.
[Blog] Setting up Analytics through Prometheus and Grafana
Hey Everyone! Balder here, this time I wanted to talk about using Prometheus and Grafana to set up analytics collection within MistServer. There’s actually quite a lot of statistics available and while we do tend to help our Enterprise customers to set this up it’s actually available for our non-commercial users as well and easily set up too.
Best practices for setting up your analytics server
As you might have guessed, using Prometheus and Grafana will require some resources, and we recommend running them on a different device than the one running MistServer. This is for a few reasons, the most important being that you want your analytics collection to keep going if your MistServer instance goes dark for some reason or runs into trouble.
As such we would recommend setting up a server whose sole focus is to get the analytics from your MistServer instances. It can be any kind of server, just make sure it has access to all your MistServer instances.
Operating system choice
While this can be run on almost every operating system, the clear winner is Linux.
Under Linux both Prometheus and Grafana work with little effort and become available as a service with their default installs. Mac comes in second, as Prometheus works without too much trouble, but Grafana requires the Homebrew package manager for macOS. Windows comes in last, as I couldn't get the binaries to work without a Linux compatibility layer like Cygwin.
Installing Prometheus and Grafana
Installing both Prometheus and Grafana under Linux is quite easy, as they're both quite popular. There's a good chance they're both immediately available as a standard package. If not, I recommend checking their websites to see how installation works for your Linux distribution of choice.
Starting them once installed is done through your service system, which is either:
systemctl start grafana.service
or
service grafana start
depending on your operating system.
On Mac, installing Prometheus is rather easy: the website provides Darwin binaries that should work. It can also be installed through Homebrew, which we will be using for Grafana anyway. Which method you use is up to you, but I prefer working with the binaries, as it made using the configuration file easier for me.
Install Homebrew as instructed on their website.
Then use the following commands in a terminal:
brew update
brew install prometheus
Installing them as services would be preferred, but I recommend leaving that until after you've set everything up. For Prometheus you will have to create your own service to have it start automatically on boot; installing Grafana through Homebrew makes it available as a service through Homebrew.
On Windows, because of the added difficulty, I would just run them both in a Cygwin terminal and be done with it, though you could try running them as a system service. The combination of Cygwin and Windows services tends to cause odd behaviour, however, so I can't exactly recommend it.
Setting up Prometheus and Grafana
01: Editing the Prometheus settings file
This is done by editing prometheus.yml, which may be stored in various locations: you will either find it in the folder you've unpacked, or, when installed through a package manager under Linux, in its default configuration directory.
You need to add the following to the scrape_configs:

scrape_configs:
  - job_name: 'mist'
    scrape_interval: 10s
    scrape_timeout: 10s
    metrics_path: '/PASSPHRASE'
    static_configs:
      - targets: ['HOST:4242']
To add multiple MistServers, just keep adding targets with their respective host and port.
An example minimal prometheus.yml would be:
scrape_configs:
  - job_name: 'mist'
    scrape_interval: 10s
    scrape_timeout: 10s
    metrics_path: '/PASSPHRASE'
    static_configs:
      - targets: ['HOST01:4242', 'HOST02:4242', 'HOST03:4242']
We did notice that if there's a bad connection between your analytics server and a MistServer instance, the scrape_timeout of 10 seconds can be too short and no data will be received. Setting a higher value for the scrape timing could help in this scenario.
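For instance, a variant of the job with more forgiving timings might look like this (the 30s values are just an illustration, pick what suits your network):

```yaml
scrape_configs:
  - job_name: 'mist'
    scrape_interval: 30s
    scrape_timeout: 30s        # give slow links more time to respond
    metrics_path: '/PASSPHRASE'
    static_configs:
      - targets: ['HOST:4242']
```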
You can check whether this all worked by opening http://HOST:9090 on the machine you've set this up on, after you've started Prometheus. Within the Prometheus interface, under Targets, you can inspect whether Prometheus can find all the MistServer instances you've included in your settings.
02: Starting Prometheus
Under Linux, start it through your service system with either:
systemctl start prometheus.service
or
service prometheus start
depending on your operating system. On Mac, use a terminal to browse to the folder where you have unpacked Prometheus and start the binary from there. On Windows, do the same from a command window.
03: Setting up Grafana
Through your installation method, Grafana should already be active and available as a service; if you are using Windows, you will need to boot Grafana by starting its server binary manually.
Once active, Grafana will have an interface available at http://HOST:3000 by default. Open this in a browser to get started on setting up Grafana.
Adding a data source
The next step is to add a data source. As we're running Grafana and Prometheus in the same location, this is quite easy: all we need to set is the URL; all other settings will be fine by default.
Name can be anything you'd want.
Type has to be set to: Prometheus.
URL will be the location of the Prometheus interface: http://HOST:9090.
Add those and you're ready for the next step.
Adding the dashboard
We've got a few dashboards available immediately, which should cover the most basic things you'd want. You can add a dashboard by following these steps:
Click on the Grafana icon in the top left corner → hover Dashboards → select Import.
Fill in the Grafana.com Dashboard field with the number of one of our preset dashboards (for example our MistServer Vitals dashboard). If recognised, the dashboard details will show up.
Just add it and you have your first basic dashboard. Our other dashboards can be added in the same manner. More information about what each dashboard is for can be found below.
MistServer provided dashboards
All of the dashboards can be found here on Grafana Labs as well.
MistServer Vitals:
This is our most basic overview, which includes pretty much all of the statistics you would want to see anyway. It covers how your server is doing resource- and bandwidth-wise.
You can switch between MistServer instances at the top of the panels by clicking and selecting the server you want to inspect.
MistServer Stream Details:
This shows generic details per active stream. Streams and servers are selected at the top of the panel. You'll be able to see the number of viewers, the total bandwidth use and the number of log messages generated by the stream.
MistServer All Streams Details:
This shows the same details as the MistServer Stream Details dashboard, but for all streams at the same time. This can be quite a lot of data, and it will become unusable if you have a lot of streams. With a low number of streams per server, however, it gives an easy to use overview.
Well, that's it for this blog post. I hope it's enough to get most of you started on using Prometheus and Grafana in combination with MistServer.
[Blog] Metadata format
Hello readers, this is Erik, and today we are going to be diving in-depth into some important updates we have been making to our internal metadata systems and communication handling.
Over the last couple of years "low latency" streaming has become more and more important, with viewers no longer accepting long buffering times or being delayed in their stream in any way. To achieve this all processes in your ecosystem will need to be able to work with the lowest latency possible, and having a media server that aids in this aspect is a large step in the right direction.
With this in mind we have been working on a new internal format for storing metadata that allows multiple processes to read while a single source process generates and registers the incoming data. By doing this directly in memory we can now bring our internal latency down to 2 frames of direct throughput, and this post is an overview of how we do this.
Communication in a modular system
Because MistServer is a multi-process environment - a separate binary is started for each and every viewer - efficiency is mostly dependent on the amount of overhead incurred by the communication between the various processes. Our very first version used a connection between each output and a corresponding input, which was replaced a couple of years ago by a technique called shared memory.
Shared memory is a technique where multiple processes - distributed over any number of executables - can access the same block of memory. By using this technique to distribute data, all source processes need to only write their data once, allowing any output process to read it simultaneously.
The main delaying factor in the current implementation is that the metadata for a live stream only gets written to memory every second. As all output processes read once per second as well, this yields a communication delay of up to 2 seconds.
For our live streams we also have the additional use case where multiple source processes can be used for a single stream in order to generate multi-bitrate output. All source processes write their own data to memory pages, and a separate MistInBuffer process handles the communication, authorization and negotiation of all tracks. Next to this, it makes sure DVR constraints are met and inactive tracks get removed from the stream.
During this it will parse the data written to a page by a source process, only to regenerate the metadata that was already available in the source to begin with. This in itself adds a delay as well, and moreover it demands processing power to recalculate information that was already known.
To make matters worse, in order to maintain an up-to-date view on all data, every executable involved in this system needs to 'lock' the metadata page in its entirety, to make sure it is the only process with access. Though the duration of this lock is generally measured in fractions of milliseconds, having a stream with hundreds or thousands of simultaneous viewers does put a strain on keeping output realtime.
For the last couple of months we have been busy with a rework of this structure to improve our metadata handling. By using the new RelAccX structure we can build a lock-less system based on records with fixed-size data fields.
If the field sizes do need to be changed a reload of the entire page can be forced to reset the header fields. By doing so all processes that reload the structure afterwards will be working with the new values, as these are stored in a structured header. This also allows us to add fields and maintain consistency going forward.
By using the structure described above, we can assign a single page per incoming track and have the source process write its updates to the same structure that is immediately available to all other processes as well. By setting the 'available' flag only after writing all data, we can make sure that the data on the page matches the generated metadata. Doing this, we have measured a latency of 2 frames end-to-end in our test environment.
In the same way we can set a record to 'unavailable' to indicate that while the corresponding data might actually still exist at the moment, it is considered an unstable part of the stream and will be removed from the buffer in the near future.
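The flag-based publishing idea can be sketched as follows. This is a simplified illustration in Python, not MistServer's actual RelAccX code; the record layout, sizes and names here are our own. The writer fills a fixed-size record first and only then flips its availability flag, so readers never act on half-written data:

```python
# Simplified sketch of lock-less record publishing with an availability
# flag. Illustrative only: not MistServer's real page format.

RECORD_SIZE = 16  # fixed-size data field, as in the described structure

class RecordPage:
    def __init__(self, record_count):
        # one availability flag plus one fixed-size data slot per record
        self.available = [False] * record_count
        self.data = [bytes(RECORD_SIZE)] * record_count

    def publish(self, index, payload):
        """Writer: store the data first, then flip the flag."""
        assert len(payload) <= RECORD_SIZE
        self.data[index] = payload.ljust(RECORD_SIZE, b"\0")
        self.available[index] = True  # readers may use the record from now on

    def retract(self, index):
        """Writer: mark a record as unstable before removing it from the buffer."""
        self.available[index] = False

    def read(self, index):
        """Reader: only touch records whose availability flag is set."""
        if not self.available[index]:
            return None
        return self.data[index]

page = RecordPage(4)
page.publish(0, b"keyframe@0ms")
print(page.read(0))  # the published record
print(page.read(1))  # None: slot 1 was never published
```

In real shared memory the flag write would additionally need the right memory-ordering guarantees, which the actual implementation has to take care of.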
Besides implementing this technique for the metadata, we have also upgraded the stability and speed of our internal communications. The key advantages are that our statistics API can now give even more accurate data, and that output processes can now select any number of tracks from a source, up from the previous limit of 10 simultaneous tracks - yes, we have had customers reaching this limit with their projects.
By updating the way we structure our internal communications, we have been able to remove nearly all latency from the system, as well as reduce resource usage by no longer recalculating 'known' data. This system will be added in our next release, which requires a full restart of the system. If you have any questions on how to handle the downtime this generates, or about the new way we handle things, feel free to contact us.