<?xml version="1.0" encoding="UTF-8" ?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>MistServer streaming media blog</title><link>http://mistserver.org</link><description>Blog posts by MistServer's engineering team on various streaming media topics</description><language>en-US</language><atom:link href="https://news.mistserver.org/blog" rel="self" type="application/rss+xml" /><item>
  <title>How-to Encode using Matroska In/Out with FFmpeg</title>
  <description><![CDATA[Hello everyone!

We've added a new guide on how to add encoding to a stream using stream processes and FFmpeg. We're planning on adding a GStreamer version as well, since we can't play favorites.

Encoding is something that is often requested, and hopefully this guide will help explain how to set it up within MistServer. Just keep in mind that your encoding possibilities very much depend on your own server resources!

Link to the guide here

If you have any questions or wonder how to set something up specifically, feel free to contact us.
]]></description>
  <link>https://news.mistserver.org/news/139/How-to+Encode+using+Matroska+In%2FOut+with+FFmpeg</link>
  <guid>https://news.mistserver.org/news/139/How-to+Encode+using+Matroska+In%2FOut+with+FFmpeg</guid>
  <pubDate>Mon, 23 Sep 2024 18:13:48 +0100</pubDate>
</item>
<item>
  <title>Pushing WebRTC WHIP into MistServer using OBS</title>
  <description><![CDATA[Hey everyone,

We've been quite busy with setting up the 3.4 release. Now that we're done, we'll start releasing more articles and how-to guides. The first one up is how to push WebRTC WHIP into MistServer using OBS. It's already available in our online documentation right here.
]]></description>
  <link>https://news.mistserver.org/news/136/Pushing+WebRTC+WHIP+into+MistServer+using+OBS</link>
  <guid>https://news.mistserver.org/news/136/Pushing+WebRTC+WHIP+into+MistServer+using+OBS</guid>
  <pubDate>Thu, 01 Aug 2024 14:09:50 +0100</pubDate>
</item>
<item>
  <title>Deepdive into using RIST</title>
  <description><![CDATA[What is the goal of this article

RIST (Reliable Internet Stream Transport) is an error correction protocol that ensures accurate transport of media streams across the public Internet. It is still a fairly recent standard, but now mature, and in daily use by media streamers even at the broadcast network level. 

This article will explain the mechanics of RIST and give you some best practices and tips to get started. There is a widely available FOSS (Free and Open Source Software) implementation of RIST, called libRIST, and we shall refer to it here.

Do note that this is written from a usage standpoint to keep things as easily accessible as possible, and in particular, to help the MistServer user implement this error correction protocol. The purely technical reasons why technique “X” would be better than technique “Y” are a deeper dive than intended for this article.

What is the RIST standard?

A big benefit to RIST is that several companies, organized through the Video Services Foundation, maintain the spec. Some of the most noted experts in packet recovery contributed their knowledge and helped design the standard. One of the design goals is interoperability: a RIST implementation by “Company 1” is meant to be interoperable with another implementation by “Company 2.”

When would you use RIST?

The biggest reason to use RIST is to move stream data over a lossy packet network; you know, like the Internet! Lab testing shows accurate playback across very extreme conditions, such as 50% packet loss. What you call a “bad hair” day over your Internet connection is probably a 1% loss. In fact, even the best corporate connections drop at least a few packets every hour, so an error correction protocol for important streams can benefit every user.

Right now, most RIST streams run between RIST servers, for transport from a source encoder to a media server which then distributes the stream in some other format such as HLS/DASH. In fact, end-user players such as VideoLAN’s vlc can already directly play a RIST stream! So if you’ve decided the time to familiarize yourself with RIST is now, you may be making exactly the right move!

Quality and reliability

The first obvious use case for RIST is dealing with "bad network conditions". When the connection from source to server, or even between servers, is expected to be weak, you'll most likely be interested in RIST. Simply put, RIST safeguards the quality of the stream that goes through the connection, and gives UDP transport the extra reliability layer it otherwise lacks.

Latency

Simply putting an extra layer on top of the bits for delivery adds latency, so surely RIST means added delay? Yes. It’s a trade-off.

Let’s explain how RIST works. The server receives a packet of media from the source, adds a sequence number and/or time stamp, and sends it off to the client. The server places a copy of the packet in a buffer. The client receives the packet… and the next packet, and the next, and so forth. But the client doesn’t immediately release each packet. The packets go into a buffer.

By examining the sequence numbers of the packets in the buffer, the client can send a re-request to the server if it finds a packet is missing or corrupted. The server then resends that packet. Once the client knows it has all the packets, it can then release the video stream to wherever it’s supposed to go.

As you can see, this means that you’ll want a buffer of at least three times the duration that it takes a packet to travel from server to client, or vice versa, plus a small amount of time for handling, or to wait and see if the original packet will arrive out of order, which sometimes happens on the Internet. A good RIST implementation will intermittently measure the “ping” time between server and client, and adjust the buffer time to be as short as possible while remaining efficient.

Note also that RIST allows for time stamps (not all media formats include them). This allows the client at the destination side to emit the stream at exactly the same speed that it was received at the source side by the server. The end result should be no (or very, very few) dropped packets, and no jitter. You may have read about humorous implementations of IP networks using carrier pigeons to transport packets; RIST would theoretically carry video over pigeon-Internet, though it might require a lot of tired pigeons!
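The buffer sizing above can be sketched as a quick back-of-the-envelope calculation. Note that the 100 ms round trip time and the 20 ms handling allowance below are made-up example numbers, not RIST defaults:

```shell
# Rough minimum-buffer estimate from a measured ping (round trip) time.
# The 100 ms RTT and 20 ms handling allowance are example values, not defaults.
rtt_ms=100
one_way_ms=$(( rtt_ms / 2 ))            # one-way travel time, server to client
handling_ms=20                          # small allowance for handling/reordering
buffer_ms=$(( 3 * one_way_ms + handling_ms ))
echo "${buffer_ms} ms"                  # → 170 ms
```

With a worse connection (say a 300 ms ping), the same rule of thumb lands around half a second, which matches the latency range mentioned below.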

The bottom line for a media provider is that at the cost of a latency of, for example, half-a-second to a few seconds, your streams will be flawlessly transported.

Security

RIST does allow you to make a secure AES-encrypted tunnel for point-to-point distribution, or even point-to-multipoint distribution, using either pre-shared keys or rotating keys, and an optional authorization protocol based upon existing protocols, currently submitted as an Internet RFC (Request for Comments).

What does RIST do differently compared to SRT?

The current leader in the field of error correction transport is Haivision SRT (Secure Reliable Transport). In fact, SRT and RIST provide similar functions. Some of the companies and engineers who designed RIST also sell implementations of SRT. But you could point to some differences in approach:


SRT tends to focus on professional installations. Engineers at a given location expect a known or constant latency between source and destination, and that is usually the starting point in the configuration of the link. Note that while there is a free implementation of SRT, large customers generally use non-free versions.
RIST, depending upon the implementation, can dynamically set and modify the buffer in accordance with changes in network conditions. This sometimes means that it may take longer from the start of receiving the stream to viewing it, while the buffer’s size is determined and the buffer filled. It also, generally, means less overhead (in CPU processing and average bandwidth per connection).


Note also that while there are paid versions available, a free version, libRIST, is already extensively used by at least one of the big three U.S. broadcast networks. libRIST also provides a very flexible license so that developers can easily incorporate it into their own, free or non-free projects.

RIST and MistServer

There’s one very big reason to choose RIST when using MistServer: RIST can provide a buffer window, something most other applications cannot handle. MistServer, however, handles this just fine. We’ll obviously have to document this for you elsewhere!
The benefit is that when you use RIST with MistServer, your latency/quality balance can actually adapt to your current connection. Most connections only do this at the start, if at all.

What are the RIST profiles?

Profiles are specified “flavors” of RIST. Each successive profile implements the features of the previous one and adds features beyond it.

Simple (Released in October 2018)

The first and most common implementation of RIST. The core of the Simple profile is to just get the error-corrected transport going and optimize for quality/latency. Note that the RIST Simple profile requires two ports, with the first an even number. Encryption is supported externally (DTLS). Stream types must be RTP or UDP.

Main (Released in March 2020)

The second profile specification released. It adds support for encryption with AES, has an optional authorization process, and supports point-to-multipoint services (or vice versa). It also supports multi-path routing, allowing for load-balanced (for redundancy) or split (for speed) paths using two Internet service providers on the source side. It provides for GRE tunneling, thus supporting multiplexing of streams and compatibility with any type of stream format. It also makes it easier for either side to “initiate” the connection, regardless of which side starts the stream. This makes it very useful when one side or the other is behind a firewall.

Advanced (Released in October 2021)

This profile supports additional tunneling capabilities (effectively providing VPN-like services) and realtime LZO compression (which is surprisingly effective for very high density streams, or for WebRTC)!

When do you choose which profile?

In testing, start with the Simple profile and work your way up.
In production, if you’re not too concerned with security, Simple will do. It’s more likely, however, that you’ll use the Main profile.

Usage quickstart guide

If you start with libRIST, you may wish to first try a libRIST to libRIST connection. There is a step-by-step guide with command line examples and explanations.

The rest of this article’s quickstart will focus upon MistServer and libRIST. MistServer (3.0 and higher) incorporates libRIST code for its RIST implementation.

Note that the RIST implementation in MistServer up to version 3.2 requires compiling your own version of MistServer. RIST is fully included in 3.2 and every release after.

All you need to do to use RIST is provide MistServer a source or push target starting with rist://. libRIST’s quick start and documentation explain the URLs thoroughly! A rist:// URL can incorporate multiple sources or targets as well as multiple streams, each with different parameters! Start small, and work your way up!

Another thing to keep in mind is that the stream buffer within MistServer also determines how much is sent to the other side when a connection is made. So keeping that low or high, depending on your goal, may help as well.

Understanding the RIST defaults

The default RIST values for MistServer are compatible with libRIST and will work well for most networks and application targets by default. We always recommend changing settings to fit your needs, but the defaults are a great starting point.

The defaults are made to work with networks fitting within:


Buffer sizes from 50 ms to 30 seconds
Networks with round trip times from 0ms to 5000ms
Bitrates from 0 to 1 Gbps
Packet size should be kept under the path's MTU (typically 1500). The library does not support packet fragmentation.
Bi-directional networks (not one-way satellite systems, though Advanced profile and other updates to the RIST specification are in progress).


Usage

When used with MistServer, RIST creates the connection between two points. An input must connect with an output. In order to match the connection, an address and port are needed.
You will sometimes see a rist:// URL in which an @ character precedes an IP address/port (as in @192.168.1.1:12345; IPv6 is also supported by the way). This applies to an IP on your host, and signifies that your host is to be in listening mode: it waits for the other side to contact it to establish the connection. Without an @ character, it means, just start sending to (or receiving from) at that address/port.
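Side by side, the two modes look like this (the addresses and port here are hypothetical examples):

```
rist://@0.0.0.0:8000     # listening mode: wait for the other side to connect
rist://192.168.1.1:8000  # active mode: send to (or receive from) this address/port
```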

MistServer using a RIST stream as input

The MistServer input will listen at an address/port (or on all interfaces through 0.0.0.0 if not set). This assumes the source sends to the specified input address. Note that multicast is supported, with the device name being a parameter.

Syntax:

rist://@(address):port(?parameter1=value&amp;parameter2=value...)


@: This signifies that the MistServer host will wait for an incoming RIST connection.
address: Optional for the input side; if given, it will listen on that specific address, if unset it will listen on all addresses.
port: UDP port to use
parameters: Additional options you can set, see below for all available parameters.
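A filled-in example of the input syntax above (the port and buffer values are hypothetical):

```
rist://@0.0.0.0:8000?buffer-min=100&amp;buffer-max=2000
```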


MistServer sending a RIST stream to a location

MistServer will reach out to the given address and start sending a stream toward it using RIST transport. It assumes the other side is listening for it.

Syntax:
rist://address:port(?parameter1=value&amp;parameter2=value...)


address: Must be set for output side, the address to connect towards.
port: UDP port to use
parameters: Additional options you can set, see below for all available parameters
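A filled-in example of this push syntax (the address and port are hypothetical):

```
rist://203.0.113.5:8000?bandwidth=12000
```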


Common pitfalls/mistakes

Mismatched Parameters

The most common mistake is mismatched parameters. If, for example, the sender encrypts with AES-128 and the receiver with AES-256, transport will not be possible.

Simple Protocol Port Assignments

As mentioned previously, the Simple profile uses two ports, and the first one must be an even number. Currently MistServer enforces even ports just in case, so an error message telling you to pick an even port will come up regardless of profile!

Routing and Firewalls

Where possible, use ping and traceroute (tracert on Windows) to verify that each side can reach the other. You can also search the web for articles regarding UDP pings; they can be done using nmap or netcat, and can help you verify that a path to the precise port you wish to use is open.

Parameters aren’t working

Note that libRIST has a verbosity setting which is extremely useful in debugging what the cause might be.

How do You “Tune” Your Connection for:

Latency and Efficiency

You can trust the latest versions of libRIST and MistServer to negotiate and auto-configure the best latency values for efficiency and reliability. But we encourage you, especially if you’re doing this for the first time, to understand manual setup as well.

Also, when you expect major transport problems, you may wish to “tune” your settings for a specific connection. Or if your MistServer installation is not quite up to date, you may also wish to configure manually.

You’ll find that MistServer works best with a dynamic buffer size for its RIST connection. The buffer-min and buffer-max parameters set the absolute smallest and largest buffer sizes, in milliseconds. The example below sets very small and large values.

Example:

rist://address:port?buffer-min=10&amp;buffer-max=1000

When RIST starts, it UDP-pings the other side to measure the round trip time. RIST then sets the initial buffer to six times the RTT whenever you manually set the buffer-min and buffer-max. Thereafter, by monitoring pings between the server and client, RIST can shrink or enlarge the buffer to the min or max, based upon network conditions. The “viewer” of the stream sees a constant playback because RIST has previously enlarged the buffer at the first sign of network problems.
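The initial pick can be sketched as a small calculation. The 40 ms RTT below is a made-up example measurement; the min/max values match the example URL above:

```shell
# Initial buffer: six times the measured RTT, clamped to buffer-min/buffer-max.
# The 40 ms RTT is an example measurement, not a default.
rtt_ms=40
buffer_min=10
buffer_max=1000
initial=$(( 6 * rtt_ms ))
if [ "$initial" -lt "$buffer_min" ]; then initial=$buffer_min; fi
if [ "$initial" -gt "$buffer_max" ]; then initial=$buffer_max; fi
echo "${initial} ms"    # → 240 ms
```

From that starting point, RIST then shrinks or grows the buffer within the min/max bounds as network conditions change.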

RTT-min and RTT-max are related parameters. Their primary use is to “signal” to the RIST algorithms that there may be either a very good or a very bad network condition ahead, based upon the difference between the min and max. In testing, ping the other side about 100 times (ping -c 100), or at different times of day. Make a note of the minimum and maximum values in milliseconds, and substitute the values in the example below:

Example:

rist://address:port?buffer-min=10&amp;buffer-max=1000&amp;rtt-min=rtt_min_from_ping&amp;rtt-max=8xrtt_max_from_ping

This would tell libRIST to calculate the lowest latency and best quality sweet spot between 10ms and 1000ms and keep doing this for as long as the connection is busy. RIST will shrink or enlarge the buffer as network conditions change.

In general, you’ll use buffer-min and buffer-max when you’re guessing at network quality, and rtt-min and rtt-max when you can actually test it.
Another thing you can do is lower the MistServer default buffer size from 50000 to, for example, 10000. Do note that MistServer will raise this buffer if needed for certain output protocols like HLS.

Note: MistServer RIST defaults

MistServer's built-in RIST transport defaults to:


buffer-min: 1000ms
buffer-max: 1000ms
rtt-min: 50
rtt-max: 500


You’ll be using the defaults automatically simply by filling in the bare minimum:

rist://(@)address:port

URL Parameter List

Note that if you use libRIST, you can see help text with ristsender --help and ristsender --help-url, plus the same for ristreceiver. We might also note here that if you have multiple MistServers set up at multiple points of presence, libRIST includes a third binary (rist2rist) which acts as a “switching point” that can distribute your original source to multiple RIST receivers without having to decode/re-encode the stream.

For every parameter below we’ll have it set with its default as used in MistServer:

Simple, Main and Advanced profile

buffer=1000 

Default 1000ms (both min/max). The buffer by which stream data can be delayed between input and output (jitter is compensated for).

bandwidth=100000

Default 100000 Kbps. Sets the maximum bandwidth in Kbps for the connection. Measure your stream bitrate; we recommend setting this 10% higher for a constant bitrate and 100% higher for a variable bitrate.
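As a quick sketch of that sizing rule (the 8000 Kbps measured bitrate is a made-up example):

```shell
# Size the bandwidth parameter from a measured stream bitrate in Kbps.
bitrate_kbps=8000
cbr_bandwidth=$(( bitrate_kbps + bitrate_kbps / 10 ))  # +10% for constant bitrate
vbr_bandwidth=$(( bitrate_kbps * 2 ))                  # +100% for variable bitrate
echo "CBR: ${cbr_bandwidth} Kbps, VBR: ${vbr_bandwidth} Kbps"
```

For this example stream, you would set bandwidth=8800 for constant bitrate, or bandwidth=16000 for variable bitrate.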

return-bandwidth=0
Default 0 Kbps (no limit). Sets the maximum bandwidth from receiver to sender (communication of re-requests, pings, metrics) in Kbps.

reorder-buffer=25

Default 25ms. Sets a secondary buffer in ms to reorder any out-of-order packets. Usually not necessary to change.

rtt=0 

Default 0 (not set). The RTT (in ms) determines the time between re-requests and makes sure the spacing between them fits the network conditions. Setting this value sets both rtt-min and rtt-max to the same value, which is not recommended.

cname=

Default empty (not set). Arbitrary name for the stream, for display in logging. For MistServer, the stream name will be copied here if possible.

weight=5

Default 5. Relative share for load balanced connections, distribution paths will be determined by the weight in comparison to other weights. Use 0 for multi-path.

buffer-min=1000 

By default equal to buffer in ms. Determines the shortest the buffer is allowed to be.

buffer-max=1000 

By default equal to buffer in ms. Determines the longest the buffer is allowed to be.

rtt-min=50 

Default 50ms, the RTT will not search below this value. Will set itself to 3ms if you try to set it lower.

rtt-max=500 

Default 500ms, the RTT will not search above this value. Will overwrite rtt-min.

timing-mode=0

0 = RTP Timestamp (default); 1 = Arrival Time, 2 = RTP/RTCP Timestamp+NTP.

The RIST specification does not mandate time synchronization. Using this parameter, libRIST will attempt to release the packets according to the time stamp indicated by the chosen option. When not set, it emits the media packets at a speed equal to that at which they were received at the sender side, plus the time buffered. Note that the Network Time Protocol option is designed so that you can synchronize playback of multiplexed streams, using NTP plus the buffer size as a guide. The allowed values are 1 for Arrival Time and 2 for RTP/RTCP Timestamp plus NTP. Note that this is different from the rtp-timestamp=# and rtp-sequence=# URL parameters, in that the latter two will not attempt to synchronize the release of the packets to the player.

virt-dst-port=1968

Default 1968. The port within the GRE (Generic Routing Encapsulation) tunnel. This has nothing to do with the media port(s). Assume the GRE is device /dev/tun11 with an address of 1.1.1.2, you set the virtual destination port to 10000, and your media uses ports 8193/4. The operating system will use 1.1.1.2:10000 for the GRE. As far as your media source and media player are concerned, the media is on ports 8193/4 on their respective interfaces; the media knows nothing of the tunnel.

profile=1

0 = Simple, 1 = Main, 2 = Advanced

Default 1 (Main).
The RIST profile in use.

verbose-level=6

Disable -1; Error 3, Warning 4, Notice 5, Info 6, Debug 7, simulation/dry-run 100

Default level is 6.
Allows you to set the verbosity of the RIST protocol; 100 is used to do a simulation/dry run of the connection.

Main and Advanced only

aes-type=#  

128 = AES-128,
256 = AES-256,

Specifies the encryption type: “128” for AES-128 or “256” for AES-256.
Remember that you must also specify the passphrase; both sides must use the same passphrase (the “secret” parameter).

secret=  

The encryption passphrase that must match on both sides.
Requires an aes-type to be set.
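Both sides of an encrypted connection would then carry matching parameters, for example (the port and passphrase are hypothetical):

```
rist://@0.0.0.0:8000?aes-type=128&amp;secret=mypassphrase
rist://receiver.example:8000?aes-type=128&amp;secret=mypassphrase
```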

session-timeout=2000  

Default 2000 ms. Terminates the RIST connection after inactivity (lack of keepalive response) for the number of milliseconds you set with this parameter.

keepalive-interval=1000  

Default 1000 ms. Time in milliseconds between pings. As is standard practice for GRE tunnels, the keepalive helps ensure the tunnel remains connected and open when no media is traversing it at a given time.

key-rotation=1  

Default 1ms. Sets the key rotation period in milliseconds when aes-type and a passphrase are specified.

congestion-control=1  

Default 1.

mitigation mode: (0=disabled, 1=normal, 2=aggressive)

libRIST provides built in congestion control, which is important in situations in which a sender drops off the connection, but the receiver still sends re-requests. The three options for this parameter are 0=disabled, 1=normal and 2=aggressive. In general, don’t set the parameter to “aggressive” unless you’ve definitely established that congestion is a problem.

min-retries=6  

Default 6.
Sets a minimum number of re-requests for a lost packet before congestion control kicks in.
Note that setting this too high can lead to congestion. Regardless of this setting, the size of the buffer and the round trip time will render too high a minimum value here irrelevant.

max-retries=20  

Default 20.
Sets a maximum number of re-requests for a lost packet.

weight=5  

Default 5
Relative weight for multi-path load balancing. Use 0 for duplicate paths.

username=  

This corresponds to the srp-auth credentials defined (globally) on the “other” side, when the “other” side is in listening mode with an srp-auth file holding the corresponding credentials. Note that libRIST includes a password utility; if you’re familiar with Apache’s htpasswd, it works just like that.

password=  

This corresponds to the srp-auth credentials defined (globally) on the “other” side, when the “other” side is in listening mode with an srp-auth file holding the corresponding credentials.

multiplex-mode=-1  

Controls how the RIST payload is muxed/demuxed (-1=auto-detect, 0=rist/raw, 1=vrtsrcport, 2=ipv4).

multiplex-filter=#  

When using multiplex-mode=2 (ipv4), this is the string used as the data filter. It should be written as destination IP:PORT.

Advanced Profile only

compression=1  

1 = enable, 0 = disable.

Enables realtime compression (LZ4 levels). Usage: append to the end of individual udp:// or rtp:// URL(s) as ?param1=value1&amp;param2=value2…

miface=

The device name to multicast on (e.g. eth0), if unset uses the system default.

stream-id=

ID number (arbitrary) for multiplexing/demultiplexing the stream in the peer connector.

rtp-timestamp=0  

Carry over the timestamp to/from the RTP header into/from RIST (0 or 1).

rtp-sequence=0  

Carry over the sequence number to/from the RTP header into/from RIST (0 or 1).

rtp-ptype=# 

Override the default RTP PTYPE with this value. RFC 3551 describes the standard types.
]]></description>
  <link>https://news.mistserver.org/news/119/Deepdive+into+using+RIST</link>
  <guid>https://news.mistserver.org/news/119/Deepdive+into+using+RIST</guid>
  <pubDate>Mon, 14 Aug 2023 14:26:25 +0100</pubDate>
</item>
<item>
  <title>Setting up Analytics with VictoriaMetrics and Grafana</title>
  <description><![CDATA[In this post we will cover installing VictoriaMetrics and Grafana and setting them up. We will also cover upgrading your Prometheus setup to Victoriametrics in case you are looking to update.

Why do I want VictoriaMetrics and Grafana for analytics?

MistServer has a tremendous amount of data available, which it uses to optimize streaming workflows in the moment. MistServer itself does not store this data anywhere, as doing so would quickly flood your server storage. Data collection applications such as VictoriaMetrics are made specifically to counter this problem: they scrape (collect) the data and store it with superior compression.
Grafana, in turn, is an application made to present data like this in what we call a "human friendly" way, providing graphs, bars, and other visual representations of the data that are easy to understand.

Best practices for setting up your analytics server

As you might have guessed, using VictoriaMetrics and Grafana will require some resources, so we recommend running them on a different device than the one running MistServer. This is for a few reasons, the most important being that you want your analytics collection to keep going if your MistServer instance goes dark for some reason or runs into trouble.

As such we would recommend setting up a server whose sole focus is collecting the analytics from your MistServer instances. It can be any kind of server; just make sure it has access to all your MistServer instances.

Requirements


One or more running MistServer instance(s)
A server with connection to all MistServer instances you want to capture analytics from


The steps to follow


First we will choose an operating system
Then we will install VictoriaMetrics &amp; Grafana
Afterwards we will set up Grafana
Lastly we will show some default MistServer dashboards for Grafana as a template


01 OS Selection

While VictoriaMetrics is able to run on nearly any OS and has Docker images available as well, our personal preference is running it on a dedicated Linux server. Linux allows us to put a very minimal OS on the server and fully dedicate it to the task of data collection. Other OSes tend to add unnecessary features and complicate usage.

If you do want to use a different OS, please feel free to do so; with the exception of the actual installation process, the rest of this guide should still help you set everything up.

02 Installing VictoriaMetrics and Grafana

VictoriaMetrics

VictoriaMetrics has been widespread since its release in 2018. By now it's almost certainly in the default package manager for your Linux distro. If for some reason it is not, we recommend installing it manually using their available downloads.

Setting up VictoriaMetrics

Now, before we start: obviously we cannot cover every setting available within VictoriaMetrics, so we'll give you something to work with. We would always recommend reading up on the applications you're using and determining for yourself which settings you want to use.

We recommend some minor tweaks to the boot arguments. The default retention period is 1 month; we will want to change it to something longer, let's say 120 months.
Adding the following to the arguments list will do the trick:
-retentionPeriod 120

Now, depending on how VictoriaMetrics is installed, the arguments could be directly in your service script or in a linked EnvironmentFile.

Another thing to consider is changing the path where the data is stored. While not strictly necessary, we usually change it to:
-storageDataPath /var/lib/victoriametrics/

Make sure that the user VictoriaMetrics runs as exists and has access to this folder.
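For a systemd-based install that reads an EnvironmentFile, the two flags together could look like this (the ARGS variable name and the file path are assumptions; check your own service script):

```
# /etc/default/victoriametrics -- example only; your install may use another file/variable
ARGS="-retentionPeriod 120 -storageDataPath /var/lib/victoriametrics/"
```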

Updating your Prometheus setup to VictoriaMetrics

If you're updating from Prometheus to VictoriaMetrics you can follow the setup above, but you also need to do two more things:
- Load in the existing prometheus.yml settings
- Import the existing Prometheus data

Loading in the existing prometheus.yml settings

The flag -promscrape.config handles this; however, your existing prometheus.yml is most likely not compatible with VictoriaMetrics. Therefore we recommend the following:
- Copy your prometheus.yml to /etc/victoria.yml
- Edit the new victoria.yml to be compatible with VictoriaMetrics
- Set promscrape.config to victoria.yml

You can copy your current prometheus.yml with the following command:
cp /path/to/prometheus.yml /etc/victoria.yml

If you do not know where your prometheus.yml is run the following:
ps aux|grep prometheus

Within the output you should see a config file argument like:
--config.file /etc/prometheus.yml

That is where your prometheus.yml is located. 

We will now edit this file. Grab your preferred text editor and we'll start making it compatible. At a bare minimum you only need the scrape config, so unless you absolutely want to keep a setting, delete everything but the scrape config.

A minimal victoria.yml would look like this:
```
scrape_configs:
  # The job name is added as a label job=&lt;job_name&gt; to any timeseries scraped from this config.
  - job_name: "mist"
    scrape_interval: 10s
    scrape_timeout: 10s
    metrics_path: '/PROMETHEUSPASSPHRASE'
    static_configs:
      - targets: ['SERVER01:4242', 'SERVER02:4242']

# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.


```

Now go back to your VictoriaMetrics service script or EnvironmentFile and add the following line to the arguments:

-promscrape.config /etc/victoria.yml

Importing the existing prometheus data

We can now start VictoriaMetrics. If you've skipped the original setup for VictoriaMetrics, keep in mind that you want to at least add -retentionPeriod 120 to the argument list; this makes the retention period 120 months instead of the default 1 month.

Let's boot up VictoriaMetrics with the correct configurations

systemctl daemon-reload
systemctl restart victoriametrics

Take a coffee break of 5-10 minutes so VictoriaMetrics has had time to collect data while Prometheus is still active (to avoid data loss).

Go to your Grafana setup and change the data source to use port 8428 instead of your Prometheus port (9090 by default).

VERIFY THIS IS WORKING
Open one of your dashboards, set the window to 5 minutes to cover the "new" collection time, and see if you're getting the stats you expect.
If your dashboards work, continue; if you're not getting anything, start debugging why. I would start with systemctl status victoriametrics and continue from there.

We will now need vmctl. If your VictoriaMetrics installation came without it, you can find vmctl on the VictoriaMetrics download page under the vmutils packages.

Check where the Prometheus storage path is:
ps aux|grep prometheus
Look for the argument:
--storage.tsdb.path /path/to/prometheus/
Write this down, you will need it in a bit.
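If you'd rather grab the path programmatically than eyeball the ps output, here is a small sketch. The example command line below is made up; your actual path will differ:

```shell
#!/bin/sh
# Extract the value of --storage.tsdb.path from a ps-style command line.
# LINE stands in for the output of: ps aux | grep prometheus
LINE='/usr/local/bin/prometheus --config.file /etc/prometheus.yml --storage.tsdb.path /var/lib/prometheus/'
PROMPATH=$(echo "$LINE" | sed -n 's/.*--storage\.tsdb\.path[= ]\([^ ]*\).*/\1/p')
echo "$PROMPATH"
```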

Close down Prometheus:
systemctl stop prometheus
systemctl disable prometheus

and start the import:
vmctl prometheus --prom-snapshot /path/to/prometheus/

Take some time off, as you now need to wait for the import to finish. Once it's done, open your dashboard again and you should see all your old data back. Congratulations, you've upgraded successfully.

Grafana

Grafana is well distributed within the Linux community. You should be able to simply install it as a service using the default installation process for your chosen OS. If for some reason it is not available, check the Grafana website for installation instructions.

3. Setting up Grafana

Through your installation method Grafana should be active and available as a service. If not, you can start it by simply running the executable, but I would look into running it as a service.

Once active Grafana will have an interface available at http://HOST:3000 by default. Open this in a browser and get started on setting up Grafana. 

Adding a data source

The next step is to add a data source. As we're running Grafana and VictoriaMetrics in the same location, this is quite easy. All we need to set are the Name, Type and URL; all other settings will be fine by default.




Name can be anything you'd want.
Type has to be set to: Prometheus (yes, this is correct.)
URL will be the location of the VictoriaMetrics interface: http://localhost:8428


Add those and you're ready for the next step.

4. Adding the dashboard

We've got a few dashboards available immediately which should cover the most basic things you'd want. You can add a dashboard by following these steps:

Click on the grafana icon in the top left corner → hover Dashboards → Select Import

You should see the following 


Fill in the Grafana.com Dashboard number with our preset dashboards (for example our MistServer Vitals: 1096)

If recognised you will see the following 


Just add that and you should have your first basic dashboard. Our other dashboards can be added in the same manner. More information about what each dashboard is for can be found below.

MistServer provided dashboards

All of the dashboards can be found here on Grafana Labs as well.

MistServer Vitals: 1096



This is our most basic overview, which includes pretty much all of the statistics you should want to see anyway. It covers how your server is doing resource- and bandwidth-wise.

You can switch between MistServers at the top of the given panels by clicking and selecting the server you want to inspect.

MistServer Stream Details: 4526



This shows generic details per active stream. Streams and servers are selected at the top of the panel. You'll be able to see the number of viewers, total bandwidth use and the number of log messages generated by the stream.

MistServer All Streams Details: 4529



This shows the same details as the MistServer Stream Details dashboard, but for all streams at the same time. This can be quite a lot of data, and it will become unusable if you have a lot of streams. If you have a low number of streams per server, however, it gives an easy-to-use overview.
]]></description>
  <link>https://news.mistserver.org/news/115/Setting+up+Analytics+with+VictoriaMetrics+and+Grafana</link>
  <guid>https://news.mistserver.org/news/115/Setting+up+Analytics+with+VictoriaMetrics+and+Grafana</guid>
  <pubDate>Thu, 01 Jun 2023 16:56:08 +0100</pubDate>
</item>
<item>
  <title>Simple token support for live streams</title>
  <description><![CDATA[Hey everyone,

We've had quite some questions about how to use tokens with live streaming and decided to give another easy example. Do note that this wouldn't be a recommended way for larger platforms, but for something small or quick and dirty this will work and is pretty much done in 5 minutes.
For a longer and proper explanation please look at this post.

What is live streaming with tokens

Live streaming with tokens gives your streamers some security in their RTMP push URL while at the same time allowing a more user-friendly stream name for viewers/channels. By default, the RTMP stream name appears in the push URL. While handy, this opens the stream up to being "hijacked", as the push URL is easily guessed.

Using tokens allows you to use complicated stream keys to push, while keeping the stream name within MistServer easy to use &amp; user friendly. To get this done we will be using Triggers and a bash script.

Requirements


MistServer 3.0+ on Linux
Minor knowledge of Linux


Steps we go through in this guide


1. We will have a look at the bash script
2. We will look at the MistServer settings
3. We will edit the bash script for your streams
4. We will change the push URLs in your pushing application for RTMP
5. We'll look at what changes when we use SRT
6. Finally we'll look at how this works for a wildcard setup


1. The bash script

#!/bin/bash

#log incoming data for debugging/logging. Uncomment below
#TRIGGERDIRECTORY="./"
#cat &gt;&gt; "${TRIGGERDIRECTORY}push_rewrite.log"

#Collect the trigger payload
DATA=`cat`

#Split up 2nd and 3rd line to IP and KEY given by trigger
IP=`echo "$DATA" | sed -n 2p`
KEY=`echo "$DATA" | sed -n 3p`

#variables to match with IP &amp; KEY.
#PASS = KEY
#ORI = IP
#STREAM = Stream name to redirect towards
function checkStream {
        PASS=$1
        ORI=$2
        STREAM=$3

        if [ "$IP" = "$ORI" ]; then
                if [ "$KEY" = "$PASS" ]; then
                        echo -n "$STREAM"
                        exit 0
                fi
        fi
}

#checkStream fills in $1,$2,$3 with: "password" "IP" "stream_name" and checks them against the payload above
#To add streams, copy the checkStream line below and fill in pass/IP/stream_name. To allow any IP, use $IP.
#checkStream "key" "$IP" "streamname"


Save the script above in your folder of preference under your preferred name. I would recommend something easily recognizable, like: push_rewrite.sh

After saving it don't forget to make it executable with the terminal command:

chmod +x file


2. MistServer settings

Stream Setting

Live streams will be set up normally, simply make a stream with any stream name and as source use:

push://


It does not matter how many live streams you set up, but do mind that every live stream needs to be added to the bash script later on.

Trigger setting

This is where the magic happens. We will use a bash script below to rewrite the stream keys to their easier counterparts.

Use the trigger 

PUSH_REWRITE
Handler (URL or executable): /path/to/bash.sh
Blocking: YES
Default response: false


The /path/to/bash.sh will be the path/filename you used when setting up the bash script.

3. Editing the bash script to your streams

Now all that is left will be editing the bash script to work with your live streams.

Generating stream tokens

This is something I highly recommend generating yourself; it can be pretty much as long as you like. To avoid annoyances I would also recommend avoiding special characters, as some applications might not like those when pushing out an RTMP stream.

For this example I will be using md5sum. Keep in mind that the whole point of the token is that it cannot be easily guessed, so don't just hash the stream name by itself. Add something that only you know in front of and behind the stream name to generate your token.

Example:

md5sum &lt;&lt;&lt; supersecret1_live_supersecret2


This will give me the output:

8f4190132f1b6f1dfa3cf52b6c8ef102


Using the stream name in there makes the generated token different for every stream, and the randomly chosen words before and after keep it impossible for outsiders to guess. Use a token you're comfortable with or feel is random enough. One could also use MD5HASH_streamname_MD5HASH as the token, making it longer.
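Scripted, the same token generation looks like this (the secrets are the example values from above; replace them with your own):

```shell
#!/bin/sh
# Generate a stream token by hashing secret strings around the stream name.
# "supersecret1"/"supersecret2" are example secrets, not values you should reuse.
STREAM="live"
TOKEN=$(printf '%s\n' "supersecret1_${STREAM}_supersecret2" | md5sum | cut -d ' ' -f 1)
echo "$TOKEN"
```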

Editing the bash file

The only thing to add is a line at the bottom of the bash file. The line needs to follow this syntax:

checkStream "key" "$IP" "streamname"


Where:


key is your Stream token
streamname is your set up stream name within MistServer
$IP is either the literal $IP to skip IP verification, or the IP address you want to whitelist as the only one able to push towards this stream. Usually tokens are enough, but this is an extra security step you can take.


Using the example above and assuming the stream within MistServer is named live, the line would be:

checkStream "8f4190132f1b6f1dfa3cf52b6c8ef102" "$IP" "live"


To add streams, simply keep adding lines below your last; make sure to use a new &amp; correct key and streamname every time.
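To sanity-check a new line without involving MistServer, you can simulate the trigger payload. The payload below is made up; its layout (line 2 = pushing IP, line 3 = stream key) follows the script above:

```shell
#!/bin/bash
# Simulate the PUSH_REWRITE payload against the matching logic.
DATA='rtmp://mistserveraddress/live/8f4190132f1b6f1dfa3cf52b6c8ef102
203.0.113.7
8f4190132f1b6f1dfa3cf52b6c8ef102'

IP=$(echo "$DATA" | sed -n 2p)
KEY=$(echo "$DATA" | sed -n 3p)

RESULT=""
checkStream() {
        if [ "$IP" = "$2" ]; then
                if [ "$KEY" = "$1" ]; then
                        RESULT="$3"
                fi
        fi
}

checkStream "8f4190132f1b6f1dfa3cf52b6c8ef102" "$IP" "live"
echo "$RESULT"
```

A matching key prints the real stream name (live here); no match prints nothing, and the trigger's default response of false rejects the push.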

4. Editing your pushing application

Instead of pushing towards stream live your pushing application would now require the token as stream key/name.

So you would be using 8f4190132f1b6f1dfa3cf52b6c8ef102 instead, making the full RTMP address:

rtmp://mistserveraddress/live/8f4190132f1b6f1dfa3cf52b6c8ef102

or

rtmp://mistserveraddress/live/
stream key: 8f4190132f1b6f1dfa3cf52b6c8ef102

Depending on whether your application wants a full url or a partial with stream key.

When pushing towards this URL you should see your stream come online with the shorter live stream name, while anyone trying to push towards MistServer using the stream name instead of the token will be blocked.
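As a quick sketch of how the pieces combine (host and token are the examples from this post):

```shell
#!/bin/sh
# Build the full RTMP push URL from server address, stream base and token.
HOST=mistserveraddress
TOKEN=8f4190132f1b6f1dfa3cf52b6c8ef102
URL="rtmp://${HOST}/live/${TOKEN}"
echo "$URL"
```

You would then hand this URL, or the rtmp://HOST/live/ part plus the token as stream key, to your pushing application.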

5. Using this set up with SRT

Using this setup with SRT works almost the same way as RTMP. The major difference is that you'll be setting up an incoming SRT port or using MistServer's default ports.

MistServer Configurations

If you're using MistServer's defaults you can skip this step; your default port will be 8889.

You will need the SRT protocol active in the protocol panel; look for TS over SRT. For the port you can choose any valid port you like. We would recommend setting 'Acceptable connection types' to 'Allow incoming connections only', as this speeds up the input side of things.

Stream Configurations

No changes here, keep the source input on "push://". This will work for both RTMP and SRT after all!

Trigger Configurations

Nothing changes for the trigger script or the trigger itself. Generate a token for every stream name you'd like to use and when it matches it will be accepted.

Pushing application configurations

In order to push towards MistServer you will have to use a newer syntax for pushing SRT. We're going to include the value streamid. This value tells MistServer what stream should be matched with your push attempt, and is exactly what we need for the token matching.

SRT URL

Your SRT URLs would look like this:
srt://mistserveraddress:port?streamid=TOKEN
You can append any other SRT parameters you want to use. Using the same token as the RTMP example you'd get:
srt://mistserveraddress:port?streamid=8f4190132f1b6f1dfa3cf52b6c8ef102
MistServer will grab the streamid, match it to the stream names set in the trigger and forward it to the matching stream name.

6. Wildcard setup

A wildcard setup is where you set up a single live stream and use that configuration for all other live streams going to your platform. Its best use is for platforms dealing with a significant number of live streams from users that might be added on the fly. It allows you to make one main stream name and add secondary streams using the same setup. It works by placing a plus symbol (+) behind the stream name, followed by any unique text identifier. For example, if you configured a stream called "test" you could broadcast to the stream "test", but also to "test+1", "test+2" and "test+foobar". All of them will use the configuration of "test", but they use separate buffers, have separate on/off states and can be requested as if they were fully separate streams.

Trigger changes

The only change here is that you make the new streams by using mainstream+uniquestream, where uniquestream is the new stream name for every new push.

For example: 
  checkStream "8f4190132f1b6f1dfa3cf52b6c8ef102" "$IP" "live+uniquestream1"
The token 8f4190132f1b6f1dfa3cf52b6c8ef102 would then create the stream live+uniquestream1, which saves you from having to configure uniquestream1 separately. Just keep adding new lines like this with a unique identifier after the plus every time, and you can use the new streams instantly.
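If you onboard streamers in bulk, you can generate these lines instead of writing them by hand. A sketch, with made-up user names and the example secrets from earlier:

```shell
#!/bin/sh
# Emit one checkStream line per user for a wildcard setup under "live".
SECRET1="supersecret1"
SECRET2="supersecret2"
for NAME in alice bob; do
        TOKEN=$(printf '%s\n' "${SECRET1}_${NAME}_${SECRET2}" | md5sum | cut -d ' ' -f 1)
        echo "checkStream \"$TOKEN\" \"\$IP\" \"live+$NAME\""
done
```

Append the output to the bottom of your trigger script.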

Conclusion

That is pretty much it for a simple bash approach to stream tokens and live streaming. It all comes down to setting up a trigger, adding every new stream to the bash script and giving it a unique token. Now, just to mention it again: we wouldn't recommend something like this for a bigger setup, but it will get you started on a small server.
]]></description>
  <link>https://news.mistserver.org/news/114/Simple+token+support+for+live+streams</link>
  <guid>https://news.mistserver.org/news/114/Simple+token+support+for+live+streams</guid>
  <pubDate>Tue, 02 May 2023 13:11:11 +0100</pubDate>
</item>
<item>
  <title>MistServer and Secure Reliable Transport (SRT)</title>
  <description><![CDATA[What is Haivision SRT?

Secure Reliable Transport, or SRT for short, is a method to send stream data over unreliable network connections. Do note that it was originally meant for server-to-server delivery. The main advantage of SRT is that it allows you to push a playable stream over networks that otherwise would not work properly. However, keep in mind that for “perfect connections” it would just add unnecessary latency, so it is mostly something to use when you have to rely on public internet connections.

Requirements
MistServer 3.0
Steps in this Article
1. Understanding how SRT works
2. How to use SRT as input
3. How to use SRT as output
4. How to use SRT over a single port
5. Known issues
6. Recommendations and best practices
1. Understanding how SRT works

SRT used to be done through a tool called srt-live-transmit. This executable could take incoming pipes or files and send them out as SRT, or listen for SRT connections and then provide the stream through standard out. We kept the usage within MistServer quite similar to this workflow, so if you have used it in the past you might recognize it.

Connections in SRT

To establish an SRT connection, both sides need to find each other. For SRT that means that one SRT process is listening for incoming connections (listener mode) and the other side reaches out to an address and calls for the data (caller mode).

Listener mode

In SRT, the listener is the side of the connection that expects to receive the streaming data. By default this is the side that monitors a port and awaits a connection.

Caller mode

In SRT, the caller is the side of the connection that sends out the streaming data to the other point. By default this is the side that establishes the connection with the other side.

Rendezvous mode

In SRT, rendezvous mode is meant to adapt to the other side and take the opposite role. If a rendezvous connection connects to an SRT listener process, it’ll become a caller. While this sounds handy, we recommend only using listener and caller mode; that way you’re always sure which side of the connection you are looking at.

Don’t confuse listener for an input or caller for an output

As you might have guessed, these defaults do not apply in all cases. Many people confuse listener for an input and caller for an output. It is perfectly valid to have an SRT process listen on a port and send out streaming data to anyone that connects. That means that while it is listening, it is meant to be serving (outputting) data.
In most cases you will use the defaults for listener and caller, but it is important to know that they are not inputs or outputs. They only signify which side reaches out to the other and which side is waiting for someone to reach out.

Putting this to practice

The SRT scheme is as follows:

srt://[HOST]:PORT?parameter1&amp;parameter2&amp;parameter3&amp;etc...

HOST This is optional. If you do not specify it, 0.0.0.0 is used, meaning all available network interfaces
PORT This is required. This is the UDP port to use for the SRT connection.
parameter This is optional; parameters can be used to set specific settings within the SRT protocol.

You can assume the following when using SRT:
Not specifying a host in the scheme will imply listener mode for the connection.
Specifying a host in the scheme will imply caller mode for the connection.
You can always overwrite a mode by using the parameter ?mode=caller/listener.
Not setting a host will default the bind to 0.0.0.0, which uses all available network interfaces.
Some examples
srt://123.456.789.123:9876
This establishes an SRT caller process reaching out to 123.456.789.123 on port 9876.
srt://:9876
This establishes an SRT listener process monitoring UDP port 9876 on all available network interfaces.
srt://123.456.789.123:9876?mode=listener
This establishes an SRT listener process using address 123.456.789.123 and UDP port 9876.
srt://:9876?mode=caller
This establishes an SRT caller process using UDP port 9876 on all available interfaces.
2. How to use SRT as input

Both caller and listener inputs can be set up by creating a new stream through the stream panel.

SRT LISTENER INPUT

SRT listener input means the server starts an SRT process that monitors a port for incoming connections and expects to receive streaming data from the other side. You can set one up using the following syntax as a source:

srt://:port
For example, srt://:9876 as the source of a stream srt_input starts an SRT process monitoring all available network interfaces on UDP port 9876. This means that any address that connects to your server could be the other side of the SRT connection. The connection is successful once an SRT caller process connects on any of the addresses the server can be reached on, using UDP port 9876.

If you want to have SRT listen on a single address that is possible too, but you will need to add the ?mode=listener parameter:

srt://host:port?mode=listener

This starts a stream srt_input with an SRT process monitoring a specific address, e.g. 123.456.789.123 on UDP port 9876. The server must be able to use the given address, otherwise it will not be able to start the SRT process. The connection is successful once an SRT caller process connects on the given address and port.

Important optional parameters

The optional parameters are available right under the Stream name and Source fields.



Buffer time (ms): The live buffer within MistServer; it is the DVR window available for a live stream.

Acceptable pushed streamids: What happens when the ?streamid=name parameter is used on an SRT connection matching this input: it can become an additional stream (wildcard), it can be ignored (the streamid is not used and the connection is treated as a push towards this input), or it can be refused.

Debug: Sets the amount of debug information.

Raw input mode: If set, the MPEG-TS data is passed on without any parsing/detection by MistServer.

Simulated live: If set, MistServer will not speed up the input in any way and plays out the stream as if it is coming in in real time.

Always on: If set, MistServer will continuously monitor for streams matching this input, instead of doing so only when a viewer attempts to watch the stream.


The most important optional parameter is the Always on flag. If it is set, MistServer will continuously monitor the given input address for matching SRT connections. If it is not set, MistServer only monitors for matching SRT connections for about 20 seconds after a viewer tried to connect.

SRT CALLER INPUT

SRT caller input means the server starts an SRT process that reaches out to another location in order to receive a stream:

srt://host:port

The above starts an SRT process that reaches out to the given address and UDP port, e.g. 123.456.789.123:9876. In order for the SRT connection to be successful there needs to be an SRT listener process on the given location and port. While it is technically possible to leave the host out of the scheme and go for a source like:

srt://:port?mode=caller
It is not recommended. The whole idea of being the caller side of the connection is that you specifically know where the other side is. If you need an input that unknown addresses can connect to, you should be using SRT listener input.

3. How to use SRT as output

SRT can be used as both a listener output and a caller output. A listener output means you wait for others to connect to you and then send them the stream; a caller output means you send the stream towards a known location.

SRT LISTENER OUTPUT

There are two methods within MistServer to set up an SRT listener output: a very specific one through the push panel, or a generic one through the protocol panel.
The difference is that setting up the SRT output through the push panel allows you to use all SRT parameters, which is important if you want to use parameters such as ?passphrase=passphrase that enforce an encryption passphrase to match or the connection is cancelled.
Setting up SRT through the protocol panel only lets you set a port; anyone connecting to that port will be able to request all streams within MistServer.

Push panel style

Setting up SRT listener output through the push panel is perfect for very specific SRT listener connections, as it allows you to use all SRT parameters while setting it up. Set up a push with target:

srt://:port?parameters
Once the SRT protocol is selected, all SRT parameters become available at the bottom. Using the SRT parameter fields here is the same as adding them as URL parameters. You could use this to set a unique passphrase for pulling SRT from your server, which will be output-only.
If you add a host to the SRT scheme, make sure you set the mode to listener.

Protocol panel style

Setting up SRT listener output through the protocol panel is done by selecting TS over SRT and setting the UDP port to listen on. You can set the Stream field, which means that anyone connecting directly to the chosen SRT port will receive the stream matching that stream name within MistServer.
Not setting it, however, allows clients to connect to this port and set ?streamid=stream_name to select any stream within MistServer. To connect to the stream srt_input, one could use the following SRT address:

srt://mistserveraddress:8889?streamid=srt_input
SRT CALLER OUTPUT

Setting up SRT caller output can only be done through the push panel. The only difference with an SRT listener output through the push panel is the mode selected.

Automatic push vs push

Within MistServer, an automatic push is started and restarted as long as the source of the push is active. This is usually the behaviour you want when you send out a push towards a known location, so we recommend using automatic pushes.

Setting up SRT CALLER OUTPUT

srt://host:port
The above would start a push of the stream live towards address 123.456.789.123 using UDP port 9876. The connection will be successful if an SRT listening process is available there. Using the SRT parameter fields here is the same as adding them as URL parameters.

4. How to use SRT over a single port

SRT can also be set up to work through a single port using the ?streamid parameter. Within the MistServer protocol panel you can set up SRT (default port 8889) to accept connections coming in, going out, or both. If set to incoming, the port can only be used for SRT connections going into the server. If set to outgoing, the port is only available for SRT connections going out of the server. If set to both, SRT will try to listen first, and if nothing happens within 3 seconds it will start trying to send out a connection once contact has been made. Do note that we have found this functionality to be buggy on some Linux setups (e.g. Ubuntu 18.04) or on highly unstable connections.

Once set up you can use SRT in a similar fashion to RTMP or RTSP: you can pull any available stream within MistServer using SRT, and push towards any stream that is set up to receive incoming pushes. It makes the overall usage of SRT a lot easier, as you do not need to set up a port per stream.

Pushing towards SRT using a single port

Any stream within MistServer set up with a push:// source can be used as a target for SRT. What you need to do is push towards:

srt://host:port?streamid=streamname
For example, if you have the stream live set up with a push:// source and your server is available on 123.456.789.123 with SRT available on port 8889, you can send an SRT caller output towards:

srt://123.456.789.123:8889?streamid=live
and MistServer will ingest it as the source for stream live.

Pulling SRT from MistServer using a single port

If the SRT protocol is set up you can also use the SRT port to pull streams from MistServer using SRT caller input. For example, if you have the stream vodstream set up and your server is available on 123.456.789.123 with SRT on port 8889, you can have another application/player connect through SRT caller:

srt://123.456.789.123:8889?streamid=vodstream
5. Known issues

The SRT library we use for the native implementation has one issue in some Linux distros. Our default usage for SRT is to accept both incoming and outgoing connections. Some Linux distros have a bug in that logic and can get stuck waiting for data when they should be pushing out, i.e. when you’re trying to pull an SRT stream from the server.
If you notice this, you can avoid the issue by setting one port for outgoing SRT connections and another port for incoming SRT connections. This setup will also win you roughly 3 seconds of latency. The only difference is that the port changes depending on whether the stream data comes into the server or leaves it.

6. Recommendations and best practices

One port for input, one for output

The most flexible method of working with SRT is using SRT over a single port. However, truly using a single port brings some downsides in terms of latency and stability.
Therefore we recommend setting up 2 ports, one for input and one for output and then using these together with the ?streamid parameter.
This has the benefit of being easier to understand as well: one port handles everything going into the server, the other port handles everything going out of the server.

Getting SRT to work better

There are several parameters (options) you can add to any SRT URL to configure the SRT connection. Anything using the SRT library should be able to handle these parameters; they are often overlooked and forgotten. Understand that the default settings of an SRT connection are not optimized for your particular link from the get-go: the defaults work under good network conditions, but are not meant to be used as-is on unreliable connections.
If SRT does not provide good results with the defaults, it’s time to make adjustments. A full list of options you can use can be found in the SRT documentation.
Using these options is as simple as setting a parameter within the URL, making them lowercase and stripping the SRTO_ part. For example, SRTO_STREAMID becomes ?streamid= or &amp;streamid= depending on whether it’s the first or a following parameter. We highly recommend starting out with the parameters below, as they are the most likely candidates to provide better results.

Latency

?latency=120
Default: 120 ms. This is what we consider the most important parameter to set for unstable connections. Simply put, it is the time SRT will wait for missing packets to come in before passing the stream on. As you might understand, if the connection is bad you will want to give the process some time; it would be unrealistic to assume everything got sent over correctly at once, as you wouldn’t be using SRT otherwise! Haivision themselves recommend setting this as:

RTT_Multiplier * RTT
RTT = Round Trip Time, basically the time it takes for the servers to reach each other back and forth. If you’re using ping or iperf, remember you will need to double the ms you get. RTT_Multiplier = a multiplier that indicates how often a packet can be re-sent before SRT gives up on it. The values are between 3 and 20, where 3 means a perfect connection and 20 means 100% packet loss. Haivision recommends using their table depending on your network constraints. If you don’t feel like calculating the proper value, you can always take a stepped approach and test the latency in 5 steps; just start fine-tuning once you reach a good enough result.

1:  4 x RTT
2:  8 x RTT
3: 12 x RTT
4: 16 x RTT
5: 20 x RTT 
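The stepped approach is easy to script. A sketch, assuming a measured RTT of 50 ms (measure your own with ping or iperf):

```shell
#!/bin/sh
# Print candidate latency values (in ms) for the 5-step approach.
RTT=50   # round trip time in ms, example value
for MULT in 4 8 12 16 20; do
        echo "step: latency=$((MULT * RTT))"
done
```

Start at the lowest value that gives a stable stream and fine-tune from there; the resulting millisecond value goes into the ?latency= parameter of the SRT URL.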
Keep in mind that setting the latency higher will always result in more delay; the gain is stream quality. The best result is always a balancing act of latency and quality.

Packetfilter

?packetfilter=fec-options
This option enables forward error correction, which in turn can help stream stability. A very good explanation of how to tackle this is available here. Important to note: it is recommended that one side sets no FEC options and the other side sets all of them. To do this best, have MistServer set nothing and have any incoming push towards MistServer set the forward error correction filter. We barely ever have to use it, but when we do we usually start out with the following:

?packetfilter=fec,cols:8,rows:4
We start with this and have not had to change it yet when combined with a good latency setting. Optimizing it further is obviously the best choice, but it helps to have a starting point that works.

Passphrase

?passphrase=uniquepassphrase
The passphrase needs to be at least 10 characters. This option sets a passphrase on the endpoint: when an SRT connection is made, the passphrase needs to match on both sides or the connection is terminated. While it is a good method to secure a stream, it does not combine well with the single-port setup: all streams through that port would use the same passphrase, making it quite unusable there. If you’d like to use a passphrase while using a single port, we recommend reading the PUSH_REWRITE token support post. If you want to use a passphrase for your output, we recommend setting up a listener push using the push panel style as explained in chapter 3. Setting up SRT as a protocol would set the same passphrase for all connections using that port, meaning both input and output.

Combining multiple parameters

To avoid confusion: these parameters work like any other URL parameters, so the first one always starts with a ? while every following one starts with an &amp;. Example:

srt://mistserveraddress:8890?streamid=streamname&amp;latency=16000&amp;packetfilter=fec,cols:4,rows:4
Conclusion

Hopefully this has given you enough to get started with SRT on your own. Of course, if there are any questions left or you run into any issues, feel free to contact us and we’ll happily help you!
]]></description>
  <link>https://news.mistserver.org/news/109/MistServer+and+Secure+Reliable+Transport+%28SRT%29</link>
  <guid>https://news.mistserver.org/news/109/MistServer+and+Secure+Reliable+Transport+%28SRT%29</guid>
  <pubDate>Mon, 07 Mar 2022 15:00:25 +0100</pubDate>
</item>
<item>
  <title>Migration instructions between 2.X and 3.X</title>
  <description><![CDATA[With the release of 3.0, we are releasing a version that has gone through extensive rewrites of the internal buffering system.

Many internal workings have been changed and improved. As such, there was no way to keep compatibility between previous versions and the 3.0 release, making a rolling update without dropping connections unfeasible.

In order to update MistServer to 3.0 properly, step one is to fully turn off your current version.
After that, just run the installer of MistServer 3.0 or replace the binaries.

Process when running MistServer through binaries


Shut down MistController
Replace the MistServer binaries with the 3.0 binaries
Start MistController


Process when running MistServer through the install script

Shut down MistController
Systemd:

systemctl stop mistserver

Service:

service mistserver stop

Start MistServer install script:

curl -o - https://releases.mistserver.org/is/mistserver_64Vlatest.tar.gz 2&gt;/dev/null | sh

Process for completely uninstalling MistServer and installing MistServer 3.0

Run:

curl -o - https://releases.mistserver.org/uninstallscript.sh 2&gt;/dev/null | sh

Then:

curl -o - https://releases.mistserver.org/is/mistserver_64Vlatest.tar.gz 2&gt;/dev/null | sh

Enabling new features within MistServer

You can enable the new features within MistServer by going to the protocol panel and enabling them. Some other protocols have gone out of date, like OGG (to be added back later on), DASH (replaced by CMAF) and HSS (replaced by CMAF, as Microsoft no longer supports HSS and has moved to CMAF themselves as well). The missing protocols can be removed/deleted without worry. The new protocols can be added manually, or automatically by pressing the “enable default protocols” button.

Rolling back from 3.0 to 2.x

Downgrading MistServer from 3.0 to 2.X runs into the same issue: connections cannot be kept active. This means you will have to repeat the process listed above, with the final binaries/install link being the 2.X version. If you deleted the old 2.X protocols during the 3.X upgrade, you will have to re-add them using the same “enable default protocols” method as well. It is safe to have both sets in your configuration simultaneously if you switch between versions a lot, or need a single config file that works on both.
]]></description>
  <link>https://news.mistserver.org/news/107/Migration+instructions+between+2.X+and+3.X</link>
  <guid>https://news.mistserver.org/news/107/Migration+instructions+between+2.X+and+3.X</guid>
  <pubDate>Mon, 14 Feb 2022 16:00:00 +0100</pubDate>
</item>
<item>
  <title>Why is all of MistServer open source?</title>
  <description><![CDATA[Hey there! This is Jaron, the lead developer behind MistServer and one of its founding members. Today is a very special day: we released MistServer 3.0 under a new license (Public Domain instead of aGPLv3) and decided to include all the features previously exclusive to the "Pro" edition of MistServer as well. That means there is now only one remaining version of MistServer: the free and open source version.

You may be wondering why we decided to do this, so I figured I'd write a blog post about it.

First some history! The MistServer project first started almost fifteen years ago, with a gaming-focused live streaming project that was intended to be a rival to the service that would later become known as Twitch. At the time, we relied on third-party technology to make this happen, and internet streaming was still in its infancy in general. Needless to say, that project failed pretty badly.

During a post-mortem meeting for that failed project, the live streaming tech we relied on came up as one of the factors that caused the project to fail. In particular how this software acted like a black box, and made it very tricky to integrate something innovative with it. The question came up whether we could have done better ourselves. We figured we probably could, and decided to try it. After all, how hard could it be to write live streaming server software, right..?

What we thought would be a short and fun project, quickly turned into something much bigger. The further we got, the more we discovered that video tech was - back then especially - a very closed off industry that is hard to get into. As we worked on the software, we came up with the idea that we wanted to change this. Open it up to newcomers, like we ourselves had tried, and make it possible for anyone with a good idea to make a successful online media service. There were several popular free and open source web server software packages, like Apache and Nginx - but all the media server software was closed (and usually quite expensive, as well). We wanted to do the same thing for media server software: create something open, free, and easy to use for developers of all backgrounds to enable creativity to flourish.

However, we also had people working on this software full-time that most definitely needed to be paid for their efforts. So while the first version of MistServer was already partially open source, we made a few hard decisions: we kept the most valuable features closed source, and the parts that were open were licensed under the aGPLv3. That license is an "infectious" open source license: it requires anyone communicating over a network with the software to get the full source code of the whole project. That would make it almost impossible to use in a commercial environment - both because of the missing features as well as the aggressive license.

That allowed us to then sell custom licensing and support contracts, while staying true to the ideas behind open source. Our plan was to eventually - as we had built up enough customer base and could afford to make this decision - release the whole software package as open source and solely sell support and similar services contracts. As we were funded by income from license sales, our growth was fairly restricted and thus slow and organic. We built up a good reputation, but were nowhere near being able to proceed with the plan we made at the start.

Over the years, we slowly did release some of the closed parts as open, but we had to be careful not to "give away" too much. To non-commercial users, we made available a very cheap version of the license without support. Our license and support contracts over time evolved to be mostly about support, and licensing itself more of an excuse to start discussing support terms. From my own interactions with our customers it has become clear that they stay with us because of the support we offer, and consider that the most valuable part of their contracts with us. However, the constrained growth did mean we were not able to fully commit to a business model that did not involve selling licenses.

Until, last October, Livepeer came along. They have a similar goal and mindset as the MistServer team did and does, which meant they not only understood our long-term plan, but believed that with the increased funding flow they brought to us, it could now finally be executed!

So, it may seem like a sudden change of course for us to release the full software as open source today, but nothing could be further from the truth. It's something we've believed in and have been wanting to do right from the start of the project. Words are lacking to describe how it feels to finally be able to come full circle and complete a plan that has been so long in the making. It's an extremely exciting moment for us, and I speak for the whole team when I say we're looking forward to continuing to improve and share MistServer with the world.
]]></description>
  <link>https://news.mistserver.org/news/106/Why+is+all+of+MistServer+open+source%3F</link>
  <guid>https://news.mistserver.org/news/106/Why+is+all+of+MistServer+open+source%3F</guid>
  <pubDate>Mon, 14 Feb 2022 15:59:56 +0100</pubDate>
</item>
<item>
  <title>Skins for the MistServer Meta-player</title>
  <description><![CDATA[Hi there!
In January 2019 we released MistServer 2.15, which included our reworked meta-player: a player that selects the best streaming protocol and player for the used stream and browser. New in this rework is just how much it can be customised - change just a few colours, add your logo or completely overhaul the look and feel: everything is possible.
In this post I, Carina, would like to highlight a few of these skinning capabilities.

I will assume you already have some knowledge of how to embed the player on your page, and of how HTML, Javascript and CSS work.

Let's transform the appearance of the meta-player from its default look of this:



..to this:



I'll take you through the steps one by one, so that you'll be able to use what you want for your own project.
You can find the files I used for this example here. 

The basics

So, how do we change the way the player looks? Well, the bit of Javascript that builds the player, mistPlay(), accepts options, and one of those is the skin option.
You can read more about all the possibilities in our manual in chapter 5.4.

The entire skin can be defined in the options that are given to mistPlay(), or it can be defined elsewhere and the skin name can be passed to mistPlay() instead. It'll look something like this:

mistPlay(streamname,{
  skin: {
    colors: { accent: "red" }
  }
});


or

MistSkins.custom = {
  colors: { accent: "red" }
};
mistPlay(streamname,{
  skin: "custom"
});


The key thing to remember when defining a skin is that your settings are parsed as changes to the default skin. That means you don't have to build everything from scratch, you can just change what you'd like to see differently. Should you want to base your work on another skin, you can specify it like this:

MistSkins.even_more_custom = {
  inherit: "custom"
}
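To make the "changes to the default skin" idea concrete, here is a small illustrative sketch of how such a merge could work. This is not the meta-player's actual code, and the default colour values below are invented:

```javascript
// Illustrative sketch: a skin definition acts as a set of changes on top of
// the default skin. Keys you set override the defaults; keys you omit are
// inherited. Not the actual meta-player implementation.
function applySkin(base, changes) {
  const result = { ...base };
  for (const key of Object.keys(changes)) {
    const value = changes[key];
    if (value && typeof value === "object" && !Array.isArray(value)) {
      // Nested objects (like colors) are merged recursively.
      result[key] = applySkin(base[key] || {}, value);
    } else {
      result[key] = value;
    }
  }
  return result;
}

// Hypothetical default colours, for illustration only.
const defaultSkin = { colors: { accent: "#0a0", background: "#000" } };
const myColors = applySkin(defaultSkin, { colors: { accent: "red" } }).colors;
console.log(myColors.accent);     // "red" (overridden)
console.log(myColors.background); // "#000" (inherited from the default)
```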


1) Change a colour

Let's start small.
Suppose I have this awesome website, and I've used red accents to spice things up or highlight certain features. Wouldn't it be nice if the player also used this same colour?
We could change other colours too, but they are more neutral so let's leave those alone. 

The skin object would become:

MistSkins.step1 = {
  colors: {
    accent: "red"
  }
};


Of course, you don't have to use a named colour. You can use any colour definition that CSS would understand.

Now, the player looks like this:



2) Add a CSS rule

The default skin has a rather pronounced background. Say we'd like to see something more subtle. What about a gradient? That'd be pretty cool, right?
If we use the web inspector to inspect the control bar, we can see that the element with class mistvideo-main has the background colour. That means we should probably add a CSS rule to override it.
CSS rules for a skin are taken from a file. This is how you add a new one:

MistSkins.step2 = {
  inherit: "step1",
  css: { custom: "myskin.css" }
};


The file itself can contain pure CSS, but it can also have variables, marked with a dollar sign. These are replaced with the values that are defined within the colors object. (We don't use CSS variables because they are not supported in the older browsers that the player needs to support.)
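As an illustration of that substitution step (a sketch, not the player's real implementation), the dollar-sign variables could be filled in like this:

```javascript
// Sketch only: replace $variables in a skin CSS file with values from the
// skin's colors object. Unknown variables are left untouched.
function fillCssVariables(css, colors) {
  return css.replace(/\$(\w+)/g, (match, name) =>
    name in colors ? colors[name] : match
  );
}

const rule = ".mistvideo-main { background: linear-gradient(to top, $background, transparent); }";
const filled = fillCssVariables(rule, { background: "#000" });
console.log(filled);
// .mistvideo-main { background: linear-gradient(to top, #000, transparent); }
```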

To set a background gradient, let's put in this:

.mistvideo-main {
  background: linear-gradient(to top, $background ,transparent);
}


Now the control bar background will be the black background colour (inherited from the default skin) fading to transparent.

While I'm there, I'm going to give the buttons a bit more breathing room, and I've also decided I'd like to see our red accent colour a bit more often. 

.mistvideo-main &gt; * {
  margin-right: 0.5em;
}

.mistvideo-controls svg.mist.icon:hover .fill,
.mistvideo-controls svg.mist.icon:hover .semiFill,
.mistvideo-volume_container:hover .fill,
.mistvideo-volume_container:hover .semiFill {
  fill: $accent;
}
.mistvideo-controls svg.mist.icon:hover .stroke,
.mistvideo-volume_container:hover .stroke {
  stroke: $accent;
}


The player now looks like this:



3) Adding a logo

To further personalise the player, let's add a logo to the top right of the video.

We can override parts of the way the elements are arranged in the DOM through the structure object. In this case we're looking for the part that's called videocontainer. 

It's probably easiest to steal the videocontainer structure from the default skin, and then add the logo to it.
To do this, open the web inspector on a page where the Meta-player is active, go to the console and type MistSkins.default.structure.videocontainer.

You should get this as an answer:
{type: "video"}

That means that the videocontainer structure currently consists of one element, represented by an object with type: "video". It refers to what we call a blueprint named video, which will return a DOM element containing the video tag.
We'll want to change the videocontainer to a div, that contains both our logo and the video element. This is how:

MistSkins.step3 = {
  inherit: "step2",
  structure: {
    videocontainer: {
      type: "container",
      children: [
        {type: "video"},
        {
          type: "logo",
          src: "//mistserver.org/img/MistServer_logo.svg"
        }
      ]
    }
  }
}


I'll add some styling to the CSS as well. An element generated by a blueprint will always receive mistvideo-&lt;BLUEPRINT TYPE&gt; as a class.

.mistvideo-logo {
  position: absolute;
  width: 10%;
  top: 1em;
  right: 1em;
  filter: drop-shadow(#000a 1px 1px 2px) brightness(1.2);
}


And tadaa, here's the logo!



4) Change the structure

There's a bunch of buttons on the control bar, some of which might not make sense for the videos you're streaming. Let's clean it up a bit!

This time we'll need to adapt the controls structure. This one is quite a bit more complicated than the videocontainer one.
Let's retrieve the default one again. To get something copy-paste-able, use this:
JSON.stringify(MistSkins.default.structure.controls,null,2)

I've put the progress bar at the bottom, as it was floating around in mid-air a little. I've moved the currentTime and totalTime blueprints to the right of the control bar and the volume control to the left, and I've removed the loop, fullscreen and track switcher controls.
That gives me this:

MistSkins.step4 = {
  inherit: "step3",
  structure: {
    controls: {
      type: "container",
      classes: ["mistvideo-column", "mistvideo-controls"],
      children: [
        {
          type: "container",
          classes: ["mistvideo-main","mistvideo-padding","mistvideo-row","mistvideo-background"],
          children: [
            {
              type: "play",
              classes: ["mistvideo-pointer"]
            },
            {
              type: "container",
              children: [
                {
                  type: "speaker",
                  classes: [
                    "mistvideo-pointer"
                  ],
                  style: {
                    "margin-left": "-2px"
                  }
                },
                {
                  type: "container",
                  classes: [
                    "mistvideo-volume_container"
                  ],
                  children: [
                    {
                      type: "volume",
                      mode: "horizontal",
                      size: {
                        height: 22
                      },
                      classes: [
                        "mistvideo-pointer"
                      ]
                    }
                  ]
                }
              ]
            },
            {
              type: "container",
              classes: ["mistvideo-align-right"],
              children: [
                {type: "currentTime"},
                {type: "totalTime"}
              ]
            }
          ]
        },
        {
          type: "progress",
          classes: ["mistvideo-pointer"]
        }
      ]
    }
  }
};


The player now looks like this:



5) Changing an icon

But we can do more than just moving elements around. We can also change the way something looks. Let's say we want to change the way the volume is displayed to a simple horizontal slider with a circle.

The volume blueprint constructs the element from the icon library. It can be overridden like this:

MistSkins.step5 = {
  inherit: "step4",
  icons: {
    volume: {
      size: {
        width: 100,
        height: 50
      },
      svg: `
        &lt;rect y="21" width="100%" height="15%" fill-opacity="0.5" class="backgroundFill"/&gt;
        &lt;rect y="21" width="50%" height="15%" class="slider horizontal fill"/&gt;
        &lt;g transform="translate(10,0)"&gt;
          &lt;svg width="50%" height="100%" class="slider horizontal"&gt;
            &lt;g transform="translate(-10,0)"&gt;
              &lt;circle cx="100%" cy="25" r="10" class="fill"/&gt;
            &lt;/g&gt;
          &lt;/svg&gt;
        &lt;/g&gt;
      `
    }
  }
};


It now contains a full-length black rectangle.
On top of that, there's another rectangle with the class "slider horizontal". The volume blueprint will change the width of this element to display the volume level.
Lastly, there's a nested svg containing the circle marking the end of the volume level indicator.
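For illustration, the resizing that the blueprint performs could be sketched like this (a made-up helper, not the player's actual code):

```javascript
// Sketch only: the volume blueprint displays the volume level by setting the
// width of the "slider horizontal" element. The volume is assumed to be a
// number between 0 and 1.
function sliderWidth(volume) {
  const clamped = Math.min(1, Math.max(0, volume));
  return Math.round(clamped * 100) + "%";
}

console.log(sliderWidth(0.5)); // "50%"
```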

Because the tooltip is rather big for this more minimalistic volume bar, I've added some CSS rules to our file:

.mist.icon .backgroundFill {
  fill: $background;
}
.mistvideo-volume .mistvideo-tooltip {
  bottom: auto !important;
  top: -2.5px !important;
  left: 2.5px !important;
  font-size: 0.8em;
  padding: 0;
  background: none;
}
.mistvideo-volume .mistvideo-tooltip .triangle {
  display: none;
}


Now, our player looks like this:



6) Using a hoverWindow blueprint

The last thing I'm not happy with is that the volume slider is always visible. There's really no need; it's okay if it only shows up when the cursor is above the speaker icon.
To achieve that, we can put the volume slider inside a blueprint that's called a hoverWindow. The hoverWindow accepts a structure object as a 'button'. When the cursor hovers over that element, the structure object that's defined as the 'window' is shown.

MistSkins.step6 = {
  inherit: "step5",
  structure: {
    controls: {
      type: "container",
      classes: ["mistvideo-column", "mistvideo-controls"],
      children: [
        {
          type: "container",
          classes: ["mistvideo-main","mistvideo-padding","mistvideo-row","mistvideo-background"],
          children: [
            {
              type: "play",
              classes: ["mistvideo-pointer"]
            },
            {
              type: "hoverWindow",
              mode: "right",
              transition: {
                show: "opacity: 1",
                hide: "opacity: 0",
                viewport: "overflow: hidden"
              },
              classes: ["mistvideo-volume_container"],
              button: {
                type: "speaker" //mute/unmute button
              },
              window: {
                type: "volume", //volume slider
                mode: "horizontal",
                size: {height: 22}
              }
            },
            {
              type: "container",
              classes: ["mistvideo-align-right"],
              children: [
                {type: "currentTime"},
                {type: "totalTime"}
              ]
            }
          ]
        },
        {
          type: "progress",
          classes: ["mistvideo-pointer"]
        }
      ]
    }
  }
};


All done! The player now looks like this:



Conclusion

I hope that's shown you just how customizable our Meta-player is.
Did you get ideas for your own projects? We'd love to see what you've made! Also feel free to contact us if there's something you're stuck on which you can't find in the manual, or if you have any suggestions.

Bye for now, and see you next time!


]]></description>
  <link>https://news.mistserver.org/news/102/Skins+for+the+MistServer+Meta-player</link>
  <guid>https://news.mistserver.org/news/102/Skins+for+the+MistServer+Meta-player</guid>
  <pubDate>Sun, 01 Mar 2020 12:00:02 +0100</pubDate>
</item>
<item>
  <title>Easy SSL for MistServer through Certbot (Linux-specific)</title>
  <description><![CDATA[Hello everyone! This post we wanted to highlight a new feature in the latest MistServer builds (since version 2.17). This version not only added an integration with CertBot for Let’sEncrypt SSL certificates, but also added all SSL functionality to the Open Source edition of MistServer.

Before we start: if you're using MistServer together with a webserver (as in running MistServer on the same server that is hosting your website) we recommend using a reverse proxy. It just makes more sense to have a single SSL certificate, and this will also allow you to run MistServer on the same port as your website which looks more professional. So, this guide is only useful for setups that run MistServer “standalone”, without a webserver on the same machine. That said, let's dig into it!

With version 2.17 of MistServer we added a new tool to your MistServer install called “MistUtilCertbot”. This tool takes care of the Certbot integration, meaning the entire setup can now be done with just a single command! (After both MistServer and Certbot are installed, of course.)

Install Certbot

Certbot is a *nix-only tool meant for easy SSL certificate management. It's a package in most distributions of Linux, so we recommend using your distribution’s package manager to install it. More information on Certbot can be found here, and distribution-specific instructions can be found here.

Run MistUtilCertbot through Certbot

Once installed you can have Certbot set up your MistServer HTTPS certificate by running the following command (run this command as the same user you would usually run certbot as; it does not matter what user MistServer is running as):

certbot certonly --manual --preferred-challenges=http --manual-auth-hook MistUtilCertbot --deploy-hook MistUtilCertbot -d DOMAIN01 -d DOMAIN02 -d ETC


You'll have to change the DOMAIN01,DOMAIN02,ETC part into your own domain(s); other than that, there’s no need to make changes.

Set up auto-renewal of Certbot certificates
This differs per distribution, so we recommend following the “Set up automatic renewal” step on Certbot’s instructions page. There is no need to follow any of the other steps, as the rest is taken care of by our integration.

Done
That's it! Your MistServer now has SSL enabled and it will be auto-renewing monthly!

Note: Currently a bug can appear where the last step does not activate Certbot correctly and no HTTPS protocol appears within MistServer. If you're experiencing this, you can solve it by running the following command:

 RENEWED_LINEAGE=/etc/letsencrypt/live/DOMAIN01/ MistUtilCertbot -g 10


Replace DOMAIN01 with the first given domain from your original certbot command.

Note 2: Some distributions of Linux come with a /etc/hosts file that does not assign an IPv6 address to localhost.

You will recognize this issue when your Certbot command keeps timing out.

MistServer running on an IPv6-capable system will require this entry, so please add localhost to any existing line starting with ::1, or add the following line to your /etc/hosts:

::1            localhost

]]></description>
  <link>https://news.mistserver.org/news/101/Easy+SSL+for+MistServer+through+Certbot+%28Linux-specific%29</link>
  <guid>https://news.mistserver.org/news/101/Easy+SSL+for+MistServer+through+Certbot+%28Linux-specific%29</guid>
  <pubDate>Mon, 28 Oct 2019 13:37:33 +0100</pubDate>
</item>
<item>
  <title>Transcript: Common OTT Problems and How to Fix Them</title>
  <description><![CDATA[Hello everyone,

This blog post covers the presentation our CTO Jaron gave during the IBC2019. The presentation was about common OTT problems and how to fix them, we're sure it's a great view if you've got some questions about what protocol, codec or scaling solution you should pick. The slides are available here.



Transcript follows

Alright! Well hello everyone, as was mentioned my name is Jaron. There we go... Hi, my name is Jaron, the CTO of DDVTech. We build MistServer, so if you hear DDVTech or MistServer, one is the company and the other is the product. We're right over there. Well of course we fix all OTT problems, but you might not be using MistServer or not wanting to use MistServer so you might want to know how we solve these problems. Some of the common ones, at least.

So, I present: OTT de-Mist-ified! How you would solve these problems if you weren't using us. I'm first going to go through a little bit of OTT history and then dive into the common problems of protocol selection, codec selection, segmenting and scaling. And I'm going to try and not talk as fast, because otherwise this is going to be over in a few minutes. I'm sure everyone would be happy with me not talking so fast.

Alright, history-wise. Well, first internet streaming was mostly audio-only because of bandwidth concerns. People were using modems over phone lines and "surprisingly" these tend to have enough bandwidth for about a conversation or so. It's like they were designed for it!

Then a little bit later, as bandwidth increased, people did what we now call VoD: pre-recorded videos.
Back then it was just... they were uploaded to a website, you would download them, and play them in a media player. Because browsers playing video back then, well, it's just... yeah, that was it. That was not something you could take seriously.

Shortly after, IP cameras appeared which used the real time streaming protocol (RTSP). Most still use RTSP to this day (even the very modern ones), because it's a very robust protocol. But, it doesn't really work on the rest of the Internet (in browsers); it just works for dedicated applications like that.

Then, Adobe came with their Flash Player and suddenly we had really cool vector animation online, and progressive streaming support. Later they also added RTMP (Real-Time Messaging Protocol), which is still sort of the standard for user contribution.

Now we've arrived at roughly today, and we have HTML5. That is proper media support on the Internet, finally! And, well, Flash went away in a flash (chuckle).

That's where we are now. What we call OTT: streaming over the internet instead of over traditional broadcast.

Let's go into the problems now. What should we do - today - to do this?

Protocol selection: protocol selection is mostly about device compatibility, because no matter where you want to stream there's some target device that you want to have working.

This could be people's cell phones, it could be set-top boxes, smart TVs... There's always something you want to make sure works, and you can't make everything work. Well, we do our best! But making everything work is infeasible, so you tend to focus on a few target groups of devices that you want to work.
This decision basically boils down to, if you are targeting iOS you're using:


HLS or WebRTC for live, because those are the only two that Apple will allow for live on iOS.
For VoD you tend to do MP4 because that does work and it's a bit easier to work with. Though you could do VoD over HLS and WebRTC if you really wanted to.


For most other consumer devices it's a mixture of protocols that tend to work roughly equally well:


MP4 and WebM which are similar to file downloads, except they play directly in-browser.
The fancy new kid on the block, WebRTC, which does low latency specifically.
The segmented protocols, which are HLS, DASH and CMAF. They are roughly equivalent nowadays.


For special cases like IP cameras or dedicated devices like set top boxes, RTSP, RTMP and TS are still used in combination with the rest.

For most people it tends to come down to using HLS for live and MP4 for VoD because that's the combination that works on practically everything. It's not always ideal because HLS tends to have a lot of latency and not everything might work well with MP4 because it has a very large header so it can take a while to download and start. So, you might want to pick a different protocol instead.

So! That's how you should select your protocol, roughly of course. I'm going to do this pretty quickly because you can't go into all the problems in one go.

The next problem is codec selection. Now, codec selection is also largely about device compatibility because depending on what chipset is in a device you may or may not have hardware accelerated decoding support.
Hardware accelerated decoding support is important, because if it's not there you can literally feel the phone burning up in your hand. It's trying to decode something that it doesn't know how to do. Without a dedicated chip it's doing it in software; software requires CPU, which requires power, which burns up your battery. So, if you're doing anything mobile you need to have hardware acceleration, and hardware acceleration (included on chips) tends to change over time.

Right now, H.264 is the most widely supported codec. It works on pretty much every consumer device that still works. Maybe some people have something really old from 15 years ago that's somehow still functional; then you might have a problem, but I don't think anyone expects modern video to play on devices like that anymore.

We also have HEVC (which is also known as H.265; they're two names for the same codec). It's on newer devices. It works great, gives you better quality per byte of data. The annoying parts are:


Not all devices have it. Just the more modern ones.
The patent pool is a nightmare. H.264 also has a patent pool, but it's pretty clear how you pay royalties. With HEVC no one even really knows who you should be paying and how much, and so people tend to stay away from it, especially because not all devices are compatible.


For the future, I'm kind-of hoping that AV1 is going to be the next big thing. Because there's so many companies backing it; there's hardware vendors doing it. I'm guessing that within the next few years we will see support for AV1 on all modern devices. And, since there are theoretically no patent pools for AV1 it would also be free to use. I think if all devices start adding this and it's free to use, it would be pretty much a no-brainer to switch to AV1 in the future. So, prepare for AV1 and use H.264 today. If you really care about limiting bandwidth as much as you can while keeping high quality you might want to think about HEVC as well.

Alright, next problem: segmenting. This is specific to segmented protocols, so: HLS, DASH, CMAF. Also HSS, but that's not very popular anymore since it's mostly just CMAF with a different name. These protocols work with small segments of data, so basically smaller downloads of a couple seconds (sometimes milliseconds or minutes) of media data that are then played one after another. The idea is that you can keep the old methods of downloading files and sending files over regular plain old (web)servers without having to put anything specific for media in there. You can still do live streams, because you have tiny little pieces that you can keep adding to the stream and then removing the old ones.

The issue with segmenting is how long those segments are. Do you make them one second? Half a second? A minute? The things that are affected by segmenting are startup time, latency and stability. When it comes to startup time, smaller segments load faster, because it takes less time to download. That means that the startup time is reduced. So, if you want low startup time make small segments. The same goes for latency: because there are smaller segments the buffer on the client side can be smaller and then latency is lower. This is the technique that everyone is using to do low latency HLS over the last few years. There's new stuff coming out soon, but this is the current technology. Basically they make the segments really, really, small and the latency is low. They call it all kinds of fancy names but that's what they're doing underneath.

The big downside to small segments is stability, because longer segments are much more stable and have less overhead. So you're wasting more bandwidth by doing small segments, and you're decreasing your stability by doing small segments. If there's something wrong with the connection and even one segment is missing, your stream starts buffering or stalling, and nobody wants that. It's a constant battle between making them long or short, and it depends on whether you care more about the latency and startup time, or more about the stability. The annoying part is that most people want all three, and you sadly can't have all three. So the key thing is knowing your market, where your priorities are, and whether you're doing something where latency and startup time are a big thing.

For example if you're in the gambling industry or something you want to do small segments. Or, maybe not even go with segments at all but use a real real-time streaming protocol like WebRTC or RTSP. If you're more traditional and sending out content then it's a good idea to make longer segments. It will mean it will take slightly longer to start up, but it will play much more smoothly and it will use less bandwidth so it's all about knowing what you're doing and picking the configuration that goes with that.

All right, the final problem is scaling. Scaling can mean two different things: it can mean delivering at scale, so you have lots and lots of viewers or lots and lots of different broadcasts, or even lots of both; or it can mean that you want to scale up and down as demand changes. Maybe you're a platform that has a few people using it at night, and then during the day the usage explodes and goes way up, and at night it goes way back down. It would be a waste to have servers running all night long not doing anything, so you kind of want to turn most of them off and then put them back up in the morning. Something like that. There are several approaches to solving the scaling problem.

You can solve it by, for example, partially using cloud capacity. The capacity you would always need you would have in-house, and then on the peak times you put some extra cloud capacity in there. It tends to be more expensive but since it's really easy to turn it on and off people love adding that for their peak times.

You could use a CDN to offload the scaling problem. You create the content, you send it to the CDN, and now it's their problem. It's not really solving it, it's more moving it to someone else. Which is nice, because they'll solve it for you.

Peer to peer can be a solution. A couple of companies here do that. By sending the data between all your different viewers you don't have to send it all from your own servers, so you can save bandwidth and make scaling slightly easier. The problem is that peer to peer only really works well if you have a lot of viewers watching the same moment in the same stream. So, for stuff like the Olympics this will work great, but if it's an episode of some TV show from two years ago... you're probably not going to have a good time doing this.

Of course there's the traditional adding and removing of servers. Which is a pretty obvious way to do things, but it's hard to do logistically, because of the lead time on ordering and installing physical servers.

Load balancing is required for most of these things, if you want to do them: deciding where viewers are going to go. Are they going to go to a particular server, or not? You can sort-of move them away from servers you want to turn off, move them to the servers you want to keep on, and switch them in and out as needed this way.

There's always just buying more bandwidth for your existing servers, or having some kind of deal with your bandwidth provider that lets you use more or less depending on the time of day. And you can combine any of these.
There's no real easy answer here, and it depends a lot on how your business is structured.

Key to doing this (scaling) properly is having good statistics and metrics on your system. If you know what the usage is during a particular time of day, you can prepare for it. There tends to be a very clear pattern in these things. So if you know what's happening and how much load you're expecting for particular events or times of day, you can kind-of anticipate it, and make sure that these above things are done in time and not after the fact.

Alright! So back to us, MistServer. We provide ready-made solutions for all of these problems and many others. So you don't have to reinvent the wheel yourself, and you can sort-of offload some of the thinking and the troubleshooting to us. If you're facing some other problem we can probably help as well.

You can ask questions if you would like to, or you can drop by our booth right there, or shoot us an email if you're watching this online, or if you come up with a question later after the show.
]]></description>
  <link>https://news.mistserver.org/news/100/Transcript%3A+Common+OTT+Problems+and+How+to+Fix+Them</link>
  <guid>https://news.mistserver.org/news/100/Transcript%3A+Common+OTT+Problems+and+How+to+Fix+Them</guid>
  <pubDate>Wed, 16 Oct 2019 13:50:45 +0100</pubDate>
</item>
<item>
  <title>Transcript: Making sense out of the fragmented OTT delivery landscape.</title>
  <description><![CDATA[Hello Everyone,

Last month was IBC2018 and they've finally released the presentations. We thought it would be nice to add our CTO Jaron's presentation as he explains how to make sense of the fragmented OTT landscape.

You can find the full presentation, slides and a transcript below.



slides

Transcript

SLIDE #1

Alright, hello everyone. Well as Ian just introduced I'm going to talk about the fragmented OTT delivery landscape. 

SLIDE #2

Because, well, it is really fragmented. There are several types of streaming.

To begin we've got real time streaming, the streaming that everyone knows and loves, so to speak.

And we have pseudo streaming, which is like if you have an HTTP server and you pretend a file is there, but it's not really a file it's actually a live stream and you're just sending it pretending there’s a file.

But that wasn't enough, of course! Segmented streaming came afterwards - which is the current popular method, where you segment a stream in several parts and you have an index and then that index just updates with new parts as they become available.

Now it would be nice if it was this simple and it was just three methods, but unfortunately it is a little bit more complicated.

SLIDE #3

All of these methods have several protocols they can use to deliver. There's different protocols for real-time, pseudo and segmented and of course none of these are compatible with each other.

There are also players, besides all this. There is a ton of them, just in this hall alone there's at least seven or eight and they all say they do the same thing; and they do, and they all work fine. But how do you know what to pick? It's hard.

SLIDE #4

We should really stop making new standards.

SLIDE #5

So, real time streaming was the first one I mentioned. There's RTMP, which is the well known protocol that many systems still use as their ingest. But it's not used very often for delivery, as Flash is no longer supported in browsers. It's a very outdated protocol: it doesn't support HEVC, AV1 or Opus; none of the newer codecs are in there. But the protocol itself is highly supported by encoders and streaming services. It's something that's hard to get rid of.

Then there's RTSP with RTP at the core, which is an actual standard unlike RTMP which is something Adobe just invented. RTSP supports a lot of things and it's very versatile. You can transport almost anything through it, but it's getting old. There's an RTSP version 2, which no one supports, but it exists. Version one is well supported but only in certain markets, like IP cameras. Most other things not as much and in browsers you can forget about it.

And then there's something newer for real-time streaming, which is WebRTC. WebRTC is the new cool kid on the block and it uses SRTP internally which is RTP with security added. Internally it's basically the same thing, but this works on browsers, which is nice as that means you can actually use it for most consumers unlike RTSP.

That gives you a bit of an overview of real time streaming. Besides these protocols you can also pick between TCP and UDP. TCP is what most internet connections use. It's a reliable method to send things, but because of it being reliable it's a bit slower and the latency is a bit higher. UDP is unreliable but has very low latency. Depending on what you're trying to do you might want to use one or the other.

All of these protocols work with TCP and/or UDP. RTMP is always TCP, RTSP can be either and WebRTC is always UDP. The spec of WebRTC says it can also use TCP, but I don't know a single browser that supports it so it's kind-of a moot point.

SLIDE #6

Then there's pseudo streaming, which as I mentioned before uses fake files that are playing while they're downloading. They're infinite in length and duration, so you can't actually download them. Well, you can, but you end up with a huge file and you don't know exactly where the beginning and the end are, so it's not very nice.

While pseudo streaming is a pretty good idea in theory, there are some downsides. There's disagreement on what format to pseudo-stream in, because there are lots of formats - like FLV, MP4, OGG, etcetera - and they all sort of work, but none perfectly. The biggest downside is that you cannot cache these streams, as they're infinite in length. A proxy server will not store them and you cannot send them to a CDN; plus, how do you decide where the beginning and end are? So pseudo streaming is a nice method, but it doesn't work very well on a scalable system.

SLIDE #7

Now segmented streaming kind of solves that problem, because when you cut the stream into little files and have an index to say where those files are you can upload those little files to your CDN or cache them in a caching server and these files will not change. You just add more and remove others and the system works.

There are some disagreements here too between the different formats. Like, what do we use to index? HLS uses text, but DASH uses XML. They contain the same information, but differ in the way of writing it. The container format for storing the segments themselves is also not settled: HLS uses TS and DASH uses MP4, though they are kind of standardizing now on fMP4 - but let's not go too deep here. The best practices and allowed combinations - what codecs work and which ones do not, do you align the keyframes or not - all of that differs between protocols as well. It's hard to reach an agreement there too.

The biggest problem in segmented streaming is the high latency. Many players want to buffer a couple of segments before playing, which means that if your segments are several seconds long, you will have a minimum latency of several times that. Which is not real-time in my understanding of the word “real-time”.

The compatibility with players and devices is also hard to follow. HLS works fine in iOS, but DASH does not unless it's fMP4 and you put a different index in front of it and it'll then only play on newer iOS models. It's hard to keep track of what will play where, that's also a kind of fragmentation you will need to solve for all of this.

SLIDE #8

So I kind of lied during the introduction when I had this slide up, there's even more fragmentation other than just these 3 types and their subtypes.

There's also encrypted streaming. When it comes to encrypted streaming there's Fairplay, Playready, Widevine, and CENC which tries to combine them a little bit. But even there they don't agree on what encryption scheme to use. So encryption is fragmented on two different levels.

Then there are reliable transports now, which are getting some popularity. These are intended for between servers, because you generally don't do this to the end consumer. There are several options here too: some of these are companies/protocols that have been around for a while, some are relatively new, some are still in development, some are being standardized and some are not. That's also a type of fragmentation you may have to deal with if you do OTT streaming.

SLIDE #9

When it comes to encrypted streaming there is the common encryption standard, CENC. Common encryption, that is what it stands for, but it's not really common because it only standardizes the format and how to transport it. It standardizes on fMP4, it standardizes on where the encryption keys are, etc. But not what type of encryption to use. All encryption types use a block cipher, but some are counter based and others are not. So depending on what type of DRM you're using you might have to use one or the other. It's not really standardized, yet it is, so it's confusing on that level as well.

SLIDE #10

Then the reliable transports, they are intended for server to server. All of them use these techniques in some combination. Some add a little bit of extra fluff or remove some of it. But they all use these techniques at the core.

Forward error correction sends extra data with the stream that allows you to calculate the contents of data that is not arriving. This means not wasting any time asking for retransmits, since you can just recalculate what was missing so you don't have to ask the other server and have another round-trip in between.

Retransmits are sort of self-explanatory: the receiving end says "hey, I didn't receive packet X, can you retransmit it, send me another copy". This wastes time, but eventually you do always get all the data, so you can resolve the stream properly.

Bonding is something on a different level altogether where you connect multiple network interfaces like wireless network and GSM and you send data over both, hoping that with the combination of everything it will all end up arriving.

If you combine all three techniques of course you will get really good reception, at the cost of lots of overhead.

There's no standardization at all yet on reliable transports. It's very unclear what the advantages and disadvantages are of all these available ones. The ones listed in the previous slide all claim to be the best, to be perfect and to use a combination of the techniques. There's no real guide as to which you should be using.

So... lots of fragmentation in OTT.

SLIDE #11

So what do you do to fix all that fragmentation? Now this is where my marketing kicks in. 

Right there is our booth, we are DDVTech, we make MistServer and it's a technology you can use to build your own systems on top of. We give you the engine you use underneath your own system and we help you solve all of these problems so you can focus on what makes your business unique and not have to worry about standardization, what to implement and what the next hot thing tomorrow is going to be.

We also allow you to auto-select protocols based on the stream contents and device you're trying to play on or what the network conditions are. Basically everything you need to be successful when you're building an OTT platform.

SLIDE #12

That’s the end of my presentation, if you have any questions you can drop by our booth or shoot us an email on our info address and we’ll help you out and get talking.
]]></description>
  <link>https://news.mistserver.org/news/92/Transcript%3A+Making+sense+out+of+the+fragmented+OTT+delivery+landscape.</link>
  <guid>https://news.mistserver.org/news/92/Transcript%3A+Making+sense+out+of+the+fragmented+OTT+delivery+landscape.</guid>
  <pubDate>Mon, 29 Oct 2018 13:00:57 +0100</pubDate>
</item>
<item>
  <title>How to build a Twitch-alike service with MistServer</title>
  <description><![CDATA[Hey all! First of all, our apologies for the lack of blog posts recently - we've been very busy with getting the new 2.14 release out to everyone. Expect more posts here so that we can catch back up to our regular posting pace!

Anyway - hi 👋! This is Jaron, not with a technical background article (next time, I promise!) but with a how-to on how you can build your own social streaming service (like Twitch or YouTube live) using MistServer.
We have more and more customers running these kind of implementations lately, and I figured it would be a good idea to outline the steps needed for a functional integration for future users.

A social streaming service usually has several common components:


A login system with users
The ability to push (usually RTMP) to an "origin" server (e.g. sending your stream to the service)
A link between those incoming pushes and the login system (so the service knows which stream belongs to which user)
A check to see if a viewer is allowed to watch a specific stream (e.g. paid streams, password-protected streams, age restricted stream, etc)
The ability to record streams and play them back later as Video on Demand


Now, MistServer can't help you with the login system - but you probably don't want it to, either. You'll likely already have a login system in place and want to keep that and its existing database. It's not MistServer's job to keep track of your users anyway. The Unix philosophy is to do one thing and do it well, and Mist does streaming; nothing else.

How to support infinite streams without configuring them all

When you're running a social streaming service, you need to support effectively infinite streams. MistServer allows you to configure streams over the API, but that is not ideal: Mist starts to slow down after a few hundred streams are configured, and the configuration becomes a mess of old streams.

Luckily, MistServer has a feature that allows you to configure once and use that stream config an infinite number of times: wildcard streams. There's no need to do anything special to activate wildcard mode: all live streams automatically have it enabled. It works by placing a plus symbol (+) behind the stream name, followed by any unique text identifier. For example, if you configured a stream called "test" you could broadcast to the stream "test", but also to "test+1", "test+2" and "test+foobar". All of them will use the configuration of "test", but use separate buffers, have separate on/off states, and can be requested as if they are fully separate streams.

So, a sensible way to set things up is to use for example the name "streams" as stream name, and then put a plus symbol and the username behind it to create the infinite separate streams. For example, user "John" could have the stream "streams+John".
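To make that concrete, here is a hypothetical push to such a wildcard instance using ffmpeg's test source. The host, stream name and user are placeholders, and the double slash stands for an empty passphrase in MistServer's native RTMP URL format:

```shell
# Push a generated test pattern to the wildcard instance "streams+John"
# on a MistServer at example.com (audio omitted for brevity).
ffmpeg -re -f lavfi -i "testsrc=s=1280x720:r=25" \
  -c:v h264 -pix_fmt yuv420p \
  -f flv "rtmp://example.com//streams+John"
```

MistServer will treat "streams+John" as its own stream, while reusing the configuration of "streams".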

Receiving RTMP streams in the commonly accepted format for social streaming

Usually, social streaming uses RTMP URLs following a format similar to rtmp://example.com/live/streamkey.
However, MistServer uses the slightly different format rtmp://example.com/passphrase/streamname. It's inconvenient for users to have to comply with Mist's native RTMP URL format, so it makes sense to tweak the config so they can use the more common format instead.

The ideal method for this is the RTMP_PUSH_REWRITE trigger. This trigger will call an executable/script or retrieve a URL, with the RTMP URL and the IP address of the user attempting to push as payload, before MistServer does any parsing on it whatsoever. Whatever your script or URL returns to MistServer is then parsed by MistServer as if it were the actual RTMP URL, and processing continues as normal afterwards. Returning an empty URL results in the push attempt being rejected and disconnected. Check MistServer's manual (under "Integration", subchapter "Triggers") for the documentation of this trigger.

An example in PHP could look like this:

<?php
//Retrieve the data from Mist
$payload = file_get_contents('php://input');
//Split payload into lines
$lines = explode("\n", $payload);
//Now $lines[0] contains the URL, $lines[1] contains the IP address.

//This function is something you would implement to make this trigger script "work"
$user = parseUser($lines[0], $lines[1]);
if ($user != ""){
  echo "rtmp://example.com//streams+".$user;
}else{
  echo ""; //Empty response, to disconnect the user
}
//Take care not to print anything else after the response, not even any newlines! MistServer expects a single line as response and nothing more.


The idea is that the parseUser function looks up the stream key from the RTMP URL in a database of users, and returns the username attached to that stream key. The script then returns the new RTMP URL as rtmp://example.com//streams+USERNAME, effectively allowing the push as well as directing it to a unique stream for the authorized user. Problem solved!
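If you configure the trigger as an HTTP URL pointing at a script like the one above, you can simulate MistServer's call by hand. The endpoint path, stream key and IP below are made up for illustration; the two-line payload format (RTMP URL first, pusher's IP second) is the part that matters:

```shell
# POST the same two-line payload MistServer would send; the response body
# should be the rewritten RTMP URL, or empty if the push is to be rejected.
curl -s --data-binary $'rtmp://example.com/live/SECRETKEY\n203.0.113.5' \
  'http://localhost/rtmp_rewrite.php'
```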

How to know when a user starts/stops broadcasting

This one is pretty easy with triggers as well: the STREAM_BUFFER trigger is ideal for this purpose. The STREAM_BUFFER trigger will go off every time the buffer changes state, meaning that it goes off whenever it fills, empties, goes into "unstable" mode or "stable" mode. Effectively, MistServer will let you know when the stream goes online and offline, but also when the stream settings aren't ideal for the user's connection and when they go back to being good again. All in real-time! Simply set up the trigger and store the user's stream status into your own local database to keep track. Check MistServer's manual (under "Integration", subchapter "Triggers") for the documentation of this trigger.
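As a minimal sketch of what a STREAM_BUFFER handler could look like when the trigger target is an executable rather than a URL: MistServer hands the payload to the script on standard input. The payload layout assumed here (stream name on the first line, buffer state such as FULL, EMPTY, DRY or RECOVER on the second) should be verified against the manual's trigger chapter:

```shell
#!/bin/bash
# Hypothetical STREAM_BUFFER handler: translate buffer states into status
# messages. A real integration would update your own database instead.
handle_buffer() {
  local stream="$1" state="$2"
  case "$state" in
    FULL)    echo "$stream went online" ;;
    EMPTY)   echo "$stream went offline" ;;
    DRY)     echo "$stream is unstable" ;;
    RECOVER) echo "$stream is stable again" ;;
    *)       echo "$stream: unknown state $state" ;;
  esac
}

# Simulate the two payload lines MistServer would deliver on stdin.
printf 'streams+John\nFULL\n' | {
  read -r STREAM
  read -r STATE
  handle_buffer "$STREAM" "$STATE"   # prints "streams+John went online"
}
```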

Access control

Now, you may not want every stream accessible to every user. Limiting this access in any way is a concept usually referred to as "access control". My colleague Carina already wrote an excellent blog post on this subject last year, and I suggest you give it a read for more on how to set up access control with MistServer.

Recording streams and playing them back later

The last piece of the puzzle: recording and Video on Demand. To record streams, you can use our push functionality. This sounds a little out of place, until you wrap your head around the idea that MistServer considers recording to be a "push to file". A neat little trick is that configuring an automatic push for the stream "stream+" will automatically activate this push for every single wildcard instance of the stream "stream"! Combined with our support for text replacements (detailed in the manual in the chapter 'Target URLs and settings'), you can have this automatically record to file. For example, a nice target URL could be: /mnt/recordings/$wildcard/$datetime.mkv. That URL will sort recordings into folders per username and name the files after the date and time the stream started. This example records in Matroska (MKV) format (more on that format in my next blog post, by the way!), but you could also record in FLV or TS format simply by changing the extension.

If you want to know when a recording has finished, how long it is, and what filename it has... you guessed it, we have a trigger for that purpose too. Specifically, RECORDING_END. This trigger fires off whenever a recording finishes writing to file, and the payload contains all relevant details on the new recording. As with the previous triggers, the manual, under "Integration", subchapter "Triggers", has all relevant details.

There is nothing special you need to do to make the recordings playable through MistServer as well - they can simply be set up like any other VoD stream. But ideally, you'll want to use something similar to what was described in another of our blogposts last year, on how to efficiently access a large media library. Give it a read here, if you're interested.

In conclusion

Hopefully that was all you needed to get started with using MistServer for social streaming! As always, contact us if you have any questions or feedback!
]]></description>
  <link>https://news.mistserver.org/news/90/How+to+build+a+Twitch-alike+service+with+MistServer</link>
  <guid>https://news.mistserver.org/news/90/How+to+build+a+Twitch-alike+service+with+MistServer</guid>
  <pubDate>Thu, 23 Aug 2018 13:37:33 +0100</pubDate>
</item>
<item>
  <title>Generating a live test stream from a server using command line</title>
  <description><![CDATA[Hello everyone! Today I wanted to talk about testing live streams. As you will probably have guessed: in order to truly test a live stream you'll need to be able to give it a live input. 
In some cases that might be a bit of a challenge, especially if you only have shell access and no live input available. It's for those situations that we've got a script that uses ffmpeg to generate a live feed which we call videogen. The script itself is made for Linux servers, but you could take the ffmpeg command line and use it for any server able to run ffmpeg.

What is videogen

Videogen is a simple generated live stream for testing live input/playback without the need for an actual live source somewhere. It is built on some of the examples available at the ffmpeg wiki site.



Requirements


Knowledge of how to start and use a terminal in Linux
Simple knowledge of BASH scripts
ffmpeg installed or usable for the terminal
gstreamer installed if you prefer gstreamer


ffmpeg

As you might've suspected, in order to use videogen you'll need ffmpeg. Make sure to have it installed or have the binaries available in order to run videogen. By now ffmpeg is so popular almost every Linux distro will have an official package for it.

Gstreamer

As an alternative to ffmpeg you can also use our Gstreamer script to generate a live test feed. In practice it's nearly the same as videogen, though because we made this one slightly harder to encode, it's easier to reach higher bitrates with.

Steps in this guide


Installing videogen
Using videogen
Additional parameters for videogen
Using a multibitrate version for videogen
Using videogen directly in MistServer
Gstreamer alternative
Using gvideogen


1. Installing videogen

Place the videogen file in your /usr/local/bin directory, or make your own videogen by pasting this code in a file and making it executable:

 #!/bin/bash

 ffmpeg -re -f lavfi -i "aevalsrc=if(eq(floor(t)\,ld(2))\,st(0\,random(4)*3000+1000))\;st(2\,floor(t)+1)\;st(1\,mod(t\,1))\;(0.6*sin(1*ld(0)*ld(1))+0.4*sin(2*ld(0)*ld(1)))*exp(-4*ld(1)) [out1]; testsrc=s=800x600,drawtext=borderw=5:fontcolor=white:fontsize=30:text='%{localtime}/%{pts\:hms}':x=\(w-text_w\)/2:y=\(h-text_h-line_h\)/2 [out0]" \
 -acodec aac -vcodec h264 -strict -2 -pix_fmt yuv420p -profile:v baseline -level 3.0 \
 $@


2. Using Videogen

Videogen is rather easy to use, but it does require some manual input: you need to specify the output, and you can override any of the codec settings in case you want/need something other than our defaults.

The only required manual input is the type of output you want and the output URL (or file). For MistServer your output options are:

RTMP

 videogen -f flv rtmp://ADDRESS/APPLICATION/STREAM_NAME


RTSP

 videogen -f rtsp rtsp://ADDRESS:PORT/STREAM_NAME


TS Unicast

 videogen -f mpegts udp://ADDRESS:PORT


TS Multicast

 videogen -f mpegts udp://MULTICASTADDRESS:PORT


SRT

 videogen -f mpegts srt://ADDRESS:PORT?streamid=STREAM_NAME


As it's all run locally it doesn't really matter which protocol you'll be using except for one point. RTMP cannot handle multi bitrate using this method, so if you want to create a multi bitrate videogen you'll usually want to use TS.

3. Additional parameters

You'll have access to any of the additional parameters that ffmpeg provides for both video and audio encoding, simply by adding them after the videogen command. When a parameter is given more than once, ffmpeg uses the last given value, so your additions override the script's defaults. For all the ffmpeg parameters we recommend checking the ffmpeg documentation for codecs, video and audio.

Some of the parameters we tend to use more often are:

-g NUMBER

This determines when keyframes show up: it sets the number of frames to pass before inserting a keyframe. When set to 25 you'll get one keyframe per second, as videogen runs at 25fps.

-s WIDTHxHEIGHT

This changes the resolution. The default of videogen is 800x600, so setting this to 1920x1080 will make it an "HD" stream, though the quality difference is barely noticeable with this script. We tend to use common screen resolutions to verify that a track is working correctly.

-c:v hevc or -c:v h264

This changes the video codec. The default is h264 baseline profile, level 3.0, which should be compatible with any modern device. Changing the codec to H265 (HEVC) or "default" h264 might be exactly what you want to test. Do note that HEVC cannot work over RTMP; use RTSP or TS instead!

-c:a mp3 -ar 44100

This changes the audio codec. The default is aac, so knowing how to set mp3 instead can be handy. Just be sure to add an audio rate as MP3 tends to bug out when it's not set. We tend to use 44100 as most devices will work with this audio rate.
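Putting a few of these together - a sketch, with ADDRESS and PORT as placeholders: a 1080p HEVC track with a keyframe every two seconds (-g 50 at videogen's 25fps) and 44.1kHz MP3 audio, sent as TS since HEVC won't work over RTMP. Because the script appends your parameters after its own, they override the default h264/aac settings:

```shell
# Extra parameters go after the videogen command and override the defaults.
videogen -s 1920x1080 -g 50 -c:v hevc -c:a mp3 -ar 44100 \
  -f mpegts udp://ADDRESS:PORT
```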

4. Multibitrate videogen

Obviously you might want to try out a multi bitrate videogen as well. You can, but you'll want to use TS instead of RTMP, as RTMP cannot handle multi bitrate through a single stream as push input.

You can find our multi bitrate videogen here.

You can also make an executable file with the following command in it:

 #!/bin/bash

 #multibitrate videogen stuff if you want to edit qualities or codecs edit the parameters per track profile. If you want to add qualities just be sure to map it first (as audio or video depending on what kind of track you want to add). Videotracks will generally need the -pix_fmt yuv420p in order to work with this script.

 exec ffmpeg -hide_banner -re -f lavfi -i "aevalsrc=if(eq(floor(t)\,ld(2))\,st(0\,random(4)*3000+1000))\;st(2\,floor(t)+1)\;st(1\,mod(t\,1))\;(0.6*sin(1*ld(0)*ld(1))+0.4*sin(2*ld(0)*ld(1)))*exp(-4*ld(1)) [out1]; testsrc=s=800x600,drawtext=borderw=5:fontcolor=white:fontsize=30:text='%{localtime}/%{pts\:hms}':x=\(w-text_w\)/2:y=\(h-text_h-line_h\)/2 [out0]" \
 -map a:0 -c:a:0 aac -strict -2 \
 -map a:0 -c:a:1 mp3 -ar:a:1 44100 -ac:a:1 1 \
 -map v:0 -c:v:0 h264 -pix_fmt yuv420p -profile:v:0 baseline -level 3.0 -s:v:0 800x600 -g:v:0 25 \
 -map v:0 -c:v:1 h264 -pix_fmt yuv420p -profile:v:1 baseline -level 3.0 -s:v:1 1920x1080 -g:v:1 25  \
 -map v:0 -c:v:2 hevc -pix_fmt yuv420p -s:v:2 1920x1080 -g:v:2 25 \
 $@ 


This will create a multi bitrate stream with AAC and MP3 audio, 800x600 and 1920x1080 h264 video tracks, and a single 1920x1080 h265 (HEVC) track. That should cover "most" multi bitrate needs.

You will always want to combine this with the ts output for ffmpeg, so using it will come down to:

 multivideogen -f mpegts udp://ADDRESS:PORT


5. Using videogen or multivideogen with MistServer directly

Of course you can also use videogen or multivideogen without a console; you will still have to put the scripts on your server (preferably in the /usr/local/bin folder), however.

To use them together with MistServer just use ts-exec and the mpegts output of ffmpeg like this:

MistServer source: 

 ts-exec:videogen -f mpegts -
 ts-exec:multivideogen -f mpegts -




You can set the streams to "always on" to have a continuous live stream, or leave them on default settings and only start the live stream when you need it. Keep in mind that as long as they're active they will use CPU.

6. Gstreamer method

6.1 RTMP



Gstreamer can provide a live stream as well and it helps having another method than just ffmpeg. There is no big benefit to Gstreamer vs ffmpeg for this use-case, so we would recommend to go with what you're familiar with.

To use Gstreamer you need to install the gvideogen file in your /usr/bin/ or make your own videogen by pasting this code in a file and making it executable:

 #!/bin/bash
 RTMP_ARG="${1}"
 BITRATE="${2:-5000}"

 GFONT='font-desc="Consolas, 20"'
 GTEXT="timeoverlay halignment=center valignment=bottom text=\"Active for:\" $GFONT ! clockoverlay halignment=center valignment=top text=\"NTP time:\" $GFONT"
 GVIDCONF="video/x-raw,height=1080,width=1920,framerate=25/1,format=I420"
 TESTIMG="videotestsrc horizontal-speed=5 ! $GVIDCONF ! mix. \
 videotestsrc pattern=zone-plate kx2=7 ky2=4 kt=15 ! $GVIDCONF ! mix. \
 videomixer name=mix sink_0::alpha=1 sink_1::alpha=0.3 ! $GTEXT ! videoconvert"
 TESTAUD="audiotestsrc wave=ticks ! faac"

 gst-launch-1.0 -q $TESTIMG ! x264enc pass=qual speed-preset=ultrafast bframes=0 quantizer=25 key-int-max=125 bitrate="${BITRATE}" ! mux. $TESTAUD ! mux. flvmux name=mux ! rtmpsink location="${RTMP_ARG}"


Usage is simple: call the script, add the push target, and optionally add the target bitrate. That's all!

Example:

./gstreamscript rtmp://mistserveraddress/live/stream_name 2000 

6.2 SRT

The method to send SRT differs slightly, but enough that it's best to just make an additional script. The idea is the same as RTMP: simply fill in the SRT address after calling the script.

#!/bin/bash
URL="${1}"
BITRATE="${2:-5000}"

GFONT='font-desc="Consolas, 20"'
GTEXT="timeoverlay halignment=center valignment=bottom text=\"Active for:\" $GFONT ! clockoverlay halignment=center valignment=top text=\"NTP time:\" $GFONT"
GVIDCONF="video/x-raw,height=1080,width=1920,framerate=25/1,format=I420"
TESTIMG="videotestsrc horizontal-speed=5 ! $GVIDCONF ! mix. \
videotestsrc pattern=zone-plate kx2=7 ky2=4 kt=15 ! $GVIDCONF ! mix. \
videomixer name=mix sink_0::alpha=1 sink_1::alpha=0.3 ! $GTEXT ! videoconvert"
TESTAUD="audiotestsrc wave=ticks ! faac"

gst-launch-1.0 -q $TESTIMG ! x264enc pass=qual speed-preset=ultrafast bframes=0 quantizer=25 key-int-max=25 bitrate="${BITRATE}" ! mux. $TESTAUD ! mux. mpegtsmux name=mux ! srtsink uri="${URL}"


Example:

./gstreamsrtscript srt://mistserveraddress:8889?streamid=stream_name 2000

7. Using gvideogen

The usage is like videogen, but if you want to do something other than RTMP you will need to make quite a few edits to the pipeline. We will keep things simple here and stick with RTMP. Let us know if you're interested in a deeper dive into Gstreamer and we will update the article or create a new one.

gvideogen defaults to 5000kbps if the bitrate is not set.

RTMP

Use gvideogen as-is with the following command:

 gvideogen rtmp://server/live/streamname (bitrate_in_kbps)


8. Testfiles

We've created a megamix file which you can use to verify both track switching and retention of track information. You can also loop this into MistServer instead of running a script. Keep in mind that browsers tend to cache VOD streams, so when it loops you might be looking at the cache instead of what comes out of your server.

On request we have added several high-resolution streams to help users check their server limits. Consider using most of these files only on local networks, as they would fail over most internet connections.

Megamix

downloadlink (95 Mb)

This is a 5min long video of several qualities with several languages (Dutch, German, English).
Qualities come in:

 1920p H264(constrained baseline) Dutch, German, English
 720p H264(constrained baseline) Dutch, German, English
 480p H264(constrained baseline) Dutch, German, English
 Mono AAC 44100hz Dutch, German, English


The goal of this file is to test multiple tracks and meta information.

148mbps

downloadlink (2.0 Gb)

This is a 1min 50sec video of 148mbps. This is meant to test how your system deals with high bitrate streams. Keep in mind that the bitrate of this video is actually higher than most internet connections would be able to handle, so expect playback to fail.
Qualities come in:

 3000x3000 H264(constrained baseline)
 Mono AAC 44100hz


62mbps

downloadlink (1.1 Gb)

This is a 2min 20sec video of 62mbps. This stream is also meant to test how your system deals with high bitrate streams, only in a slightly more normal resolution format. Expect playback to fail for most connections.

Qualities come in:

 2560x1440 H264(constrained baseline)
 Mono AAC 44100hz


17mbps

downloadlink (124 Mb)

This is a 1min video of 17mbps. This stream is meant to test high bitrate limits of your system when it comes to a "normal" HDMI format.

Qualities come in:

 1920p H264(constrained baseline)
 Mono AAC 44100hz

]]></description>
  <link>https://news.mistserver.org/news/88/Generating+a+live+test+stream+from+a+server+using+command+line</link>
  <guid>https://news.mistserver.org/news/88/Generating+a+live+test+stream+from+a+server+using+command+line</guid>
  <pubDate>Tue, 05 Jun 2018 12:00:37 +0100</pubDate>
</item>
<item>
  <title>Repushing to social media and streaming services using MistServer</title>
  <description><![CDATA[Hey everyone, Balder here. Today I wanted to talk about repushing to social media like Youtube and Facebook or streaming services like Twitch and Picarto using MistServer. Why would you want to put a media server in the middle of this, what would the benefit be and how do you set it up? Let's find out shall we.

When do you want to use MistServer to repush?
There could be multiple reasons why you would want to use MistServer to push your live stream to your social media or streaming service, but the most likely answer will be that you're pushing to multiple platforms at the same time. 
You will want to use MistServer once you stream to more than one platform, as it saves your local PC the trouble of creating and sending out several live streams at the same time. Besides needing fewer resources (bandwidth, CPU and RAM) because you'll only have one outgoing stream, MistServer's unique buffering feature allows it to reach all the stream targets from the same buffer. Your viewers will get the exact same stream no matter their preferred platform.

Other reasons to put MistServer in between would be using MistServer to record, or using its input to transcode the stream before sending it to your targets. Those cases are usually best handled on your first encoding machine, unless that machine has trouble keeping up with real-time while doing everything at once.

Repushing a stream through MistServer
Repushing through MistServer unsurprisingly goes through the "Push" panel. There are two options: push or automatic push. In order to push a stream using MistServer you need to have a stream created, so be sure to make your live stream first.

Push starts a push of your chosen stream immediately, and once it's done it will stop and remove itself. A push activated on a stream that is offline will close again almost immediately, so don't use this option on streams that aren't active.

Automatic push pushes your chosen stream every time it becomes active: it waits until you start streaming, then immediately pushes the stream to the given target. You can also set it up to wait a few seconds before pushing, but that's generally not something you would use for live streaming. In general this is the option you want to automatically push towards your other platforms from a single point.

Once you've chosen your method you just need to fill in the stream name used within MistServer and the target to push to.

An example of streaming to Picarto would be:
Stream name: {mylivestream}
Target: rtmp://live.eu-west1.picarto.tv/golive/{my_picarto_stream_key}

As you can see, it's as easy as knowing the push target; if you want to push to multiple platforms at the same time, just add another push. I'll leave short explanations for YouTube, Facebook, Twitch and Picarto below.

Pay attention to the "?" symbol
MistServer assumes that everything after the last "?" symbol contains additional parameters like track selection or push mode. If your stream target has a question mark included in its path, make sure you add an additional "?" at the end of the target. Don't worry, that final question mark itself will be removed.

YouTube
YouTube has everything you need in your live dashboard overview. You can get to the right panel by going to your creator studio and selecting live streaming and stream now. At the bottom of that page you'll find the RTMP url and your stream key. Just use those in MistServer and you should be good.

Your stream target will be something like:
rtmp://a.rtmp.youtube.com/live2/{youtube_stream_name_key}

Facebook
Facebook finally has a permanent stream key, which makes things easier; there's no longer a need to prepare a stream key before going live. Go to the Facebook live page and tick the box for a permanent key. After that just fill in the URL and stream key, but note that stream keys from Facebook always contain a "?" symbol, so you need to add one at the end for it to work properly with MistServer.

Your stream target will be something like:
rtmp://live-api.facebook.com:80/rtmp/{stream_key}?

Twitch
Streaming to Twitch requires some looking up, but once set up it'll keep working until you change your settings, which is great as it allows you to use automatic pushes. You'll need to look up two things: the recommended Twitch ingest server and your stream key.

Recommended Twitch ingest server: this is most easily found on the Twitch ingest servers page; just pick one that works.

Stream key: this is found in your dashboard. Never share it with anyone, as they can hijack your channel with it. It's of course safe to fill into your stream target in MistServer, but make sure no one is watching while you're setting this up.

Once done your stream target should look something like this:
rtmp://live-ams.twitch.tv/app/{stream_key}

I've chosen the Amsterdam Twitch server here as I'm located in The Netherlands; the {stream_key} part just needs to be replaced with whatever Twitch gives you in your dashboard panel. Once that's done you're all set.

Picarto
Streaming to Picarto is a little easier than Twitch, as a single page contains all the information you need; unless you reset your stream key you'll only need to set this up once.

Once you've chosen your ingest server and stream key it should look something like this:
rtmp://live.eu-west1.picarto.tv/golive/{stream_key}

I've chosen the Europe Picarto server for this example as I'm still in The Netherlands.

That's all for this blog post. I hope it makes clear why and when you would want to use MistServer as a repushing server for other platforms, and gives you a starting point for setting it up.
]]></description>
  <link>https://news.mistserver.org/news/87/Repushing+to+social+media+and+streaming+services+using+MistServer</link>
  <guid>https://news.mistserver.org/news/87/Repushing+to+social+media+and+streaming+services+using+MistServer</guid>
  <pubDate>Wed, 25 Apr 2018 18:21:16 +0100</pubDate>
</item>
<item>
  <title>What hardware do I need to run MistServer?</title>
  <description><![CDATA[Hey everyone!
A very common question we get is about the hardware requirements for MistServer. Like any good piece of software there is no "real" hardware requirement to use MistServer, but there definitely is a hardware requirement for what you want to achieve media-streaming-wise.
We'll first give you the calculators, then explain the main categories and how and why each number was chosen.

The calculators


[Interactive calculators: the original post embeds two calculators here. The first, "Hardware requirements calculator for MistServer", takes the maximum amount of simultaneous streams, the average stream bitrate (KBit/s), the maximum amount of simultaneous viewers, the total duration of streams to be recorded (hours) and a safety margin (%), and returns the estimated and recommended CPU (passmark score), RAM (GB), bandwidth (GBit/s) and storage (GB). The second, "Server capacity calculator for MistServer", takes the amount of simultaneous streams, the average stream bitrate (KBit/s) and your server's CPU (passmark score), RAM (GB), bandwidth (GBit/s) and storage (GB), and returns how many viewers each resource can support and how many hours of recordings fit. The formulas behind both are listed near the end of this post.]

How do I decide on the hardware?

We tend to divide the hardware into 4 necessary categories: CPU, RAM, bandwidth and storage. Each category is important, but depending on your streaming media needs one might be more important than another.

Processor (CPU)

The CPU is obviously important as it handles all the calculations/requests in your server. Luckily MistServer itself is not that demanding on your system; if you are running other applications alongside MistServer they will probably matter more for your processor choice than MistServer itself. As MistServer is heavily multi-processed it does benefit from processors that can handle more threads or have more cores.

To give you something tangible: if you go to the cpubenchmark mega list, every 3.4 points of CPU mark equals one viewer. For example, the Intel Xeon E5-2679 v4 @ 2.50GHz comes with a CPU mark of 25236 and will be able to handle roughly 7400 viewers at the same time.

Memory (RAM)

Memory gets heavier use on media servers, as it is often used for temporary data such as the stream data itself. This means it is used for both incoming and outgoing streams, and the required memory can rise quite rapidly. MistServer gets a handle on this by sharing the stream data between inputs, outputs and even different protocols when possible. Still, it is safest to calculate the necessary memory for the absolute worst case, where memory cannot be shared at all!

To calculate worst-case memory use with MistServer: under the default settings MistServer needs roughly 12.5MB of memory for every megabit per second of incoming stream bandwidth, so the more streams or stream tracks, the more memory you need. On top of this comes a constant 2MB of memory per active connection (in either direction).

So if I assume 50 incoming streams of 2mbps and 600 viewers I will need: 12.5×2×50 = 1250 MB plus 2×650 = 1300 MB, for 2550 MB in total. That's roughly 2.5GB; I would recommend a safety margin of at least 10%, so going with at least 2.8GB would be wise.

Bandwidth (network)

Bandwidth is often the main bottleneck when it comes to streaming media, especially for higher qualities like 4K. Bandwidth is simply the amount of traffic your server can handle before the network connection is saturated. Once your network is saturated, users have to wait for their data, which leads to a very bad viewer experience for media. It is definitely one of the main things to avoid, and thus necessary to calculate what you can handle.

Luckily this is quite easy to calculate: take the stream quality, multiply it by every connection (both incoming and outgoing) for every stream you have or plan to have, and add it all together. Note that even if a stream (or a stream quality) is not being viewed, its incoming connection still uses network bandwidth if it is pushed from an outside source, so do not neglect those streams.

For example: say I have 5 streams, one of 1mbps, two of 2mbps and two of 5mbps, with 50 viewers on the 1mbps stream, 300 viewers on the 2mbps streams and 150 viewers on the 5mbps streams. I will then need to handle: 1mbps×(50+1) + 2mbps×(300+2) + 5mbps×(150+2) = 1415mbps. As you can see, especially higher quality streams can make this number rise rather fast, which is usually why a CDN or load balancer is used.

Storage (disk)

Storage is usually more easily understood: all you need is enough space to fit whatever streams you want to provide on demand or record. Given the price of storage compared to the other hardware categories, people tend to go a bit overboard with it, but it cannot hurt to have more.

Storage is easily calculated: multiply the stream quality by the duration, for every stream you have. The only thing to pay attention to is that stream qualities are measured in bits while storage is measured in bytes. There are 8 bits in a byte, so the storage necessary is 8 times less than bandwidth×duration.

Following the bandwidth example: if I take the same 5 streams, one of 1mbps, two of 2mbps and two of 5mbps, and want to record them all for 20 minutes, I would need:
(1×20×60 + 2×2×20×60 + 2×5×20×60) / 8 = 2250MB. So a little over 2GB.

Calculate your own hardware requirements

Simply use the calculators at the start of this post and you should be good to go.

Formulas to manually calculate

If you'd rather calculate by hand, that's possible too. Just use the following formulas:


CPU: 3.4×Connections (viewers + incoming streams) = necessary cpubenchmark score
Memory: 12.5×Stream_mbps×Streams_total + 2×Connections = MB RAM
Bandwidth: Average_stream_quality_mbps×(Input_streams + Viewers) = Bandwidth in Mbit
Storage: Duration_of_recordings_in_seconds×Average_stream_quality_mbps / 8 = Storage in MByte


As a reminder, the steps between kilo, mega and giga are 1024, not 1000, when we're measuring bits or bytes. Make sure you use 1024 when converting between these values or you will end up with a calculation error.
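The formulas above can be wrapped in a small helper script. Here's a minimal bash sketch (the function names are my own, not MistServer's; awk handles the fractional math, and "connections" means viewers plus incoming streams, matching the RAM example earlier):

```shell
#!/bin/bash
# Minimal sketch of the four formulas above.
# Bitrates are in mbps, recording duration in hours.
calc() { awk "BEGIN { printf \"%.0f\n\", $1 }"; }

cpu_passmark()   { calc "3.4 * ($1 + $2)"; }         # $1=viewers, $2=incoming streams
ram_mb()         { calc "12.5 * $1 * $2 + 2 * $3"; } # $1=stream mbps, $2=streams, $3=connections
bandwidth_mbit() { calc "$1 * ($2 + $3)"; }          # $1=avg stream mbps, $2=input streams, $3=viewers
storage_mb()     { calc "$1 * $2 * 3600 / 8"; }      # $1=recording hours, $2=avg stream mbps
```

For example, ram_mb 2 50 650 gives the worst-case RAM in MB for 50 incoming 2mbps streams with 650 active connections, and cpu_passmark 600 50 gives the passmark score needed for 600 viewers on 50 incoming streams.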

Well, that was it for this blog post. I hope it helped you understand what kind of hardware to look for when using MistServer. Our next blog post, by Erik, will cover stream encryption.
]]></description>
  <link>https://news.mistserver.org/news/85/What+hardware+do+I+need+to+run+MistServer%3F</link>
  <guid>https://news.mistserver.org/news/85/What+hardware+do+I+need+to+run+MistServer%3F</guid>
  <pubDate>Thu, 01 Mar 2018 09:51:42 +0100</pubDate>
</item>
<item>
  <title>Setting up Analytics through Prometheus and Grafana</title>
  <description><![CDATA[Hey Everyone! Balder here, this time I wanted to talk about using Prometheus and Grafana to set up analytics collection within MistServer. There’s actually quite a lot of statistics available and while we do tend to help our Enterprise customers to set this up it’s actually available for our non-commercial users as well and easily set up too. 

Best practices for setting up your analytics server

As you might have guessed, using Prometheus and Grafana requires some resources, and we recommend running them on a different device than the one running MistServer. There are a few reasons for this, the most important being that you want your analytics collection to keep going if your MistServer instance goes dark or runs into trouble.

As such we would recommend setting up a server whose sole focus is to get the analytics from your MistServer instances. It can be any kind of server, just make sure it has access to all your MistServer instances.

Operating system choice

While this can be run in almost every operating system, a clear winner is Linux.

Under Linux both Prometheus and Grafana work with little effort and become available as a service with their default installs. Mac comes in second, as Prometheus works without too much trouble but Grafana requires the Homebrew package manager for macOS. Windows comes in last, as I couldn't get the binaries to work without a Linux compatibility layer like Cygwin.

Installing Prometheus and Grafana

Linux

Installing both Prometheus and Grafana under Linux is quite easy as they're both quite popular; there's a good chance both are available as a standard package. If not, check their websites for installation instructions for your Linux distribution of choice.

Starting them once installed is done through your service system which is either:

systemctl start grafana.service

or

service grafana start

depending on your operating system.

MacOS

Installing Prometheus can be done rather easily. The website provides Darwin binaries that should work on your Mac. It can also be installed through Homebrew, which we will be using for Grafana. Which method you use is up to you, but I prefer working with the binaries as it made using the configuration file easier for me.

Install Homebrew as instructed on their website.

Then use the following commands in a terminal:

brew update
brew install prometheus


Installing it as a service would be preferred, but I would recommend leaving that until after you've set everything up.

Installing Grafana can also be done through Homebrew. The Grafana website offers some excellent steps to follow in order to install it properly.

For Prometheus you will have to make your own service to have it automatically start on boot. Installing Grafana through Homebrew will make it available as a service through Homebrew.

Windows

Both Prometheus and Grafana offer Windows binaries, however I could not get them to work natively in Windows 10. They did instantly work when I tried running them in the Cygwin terminal. 

Because of the added difficulty here I would just run them both in a Cygwin terminal and be done with it, though you could try running them as a system service. The combination of Cygwin and Windows services tends to cause odd behaviour however, so I can't exactly recommend it.

Setting up Prometheus and Grafana

01: Editing the Prometheus settings file

This is done by editing prometheus.yml, which may be stored in various locations. You will find it either in the folder you've unpacked, or, when installed on Linux, at /etc/prometheus/prometheus.yml or /etc/prometheus.yml.

You need to add the following to the scrape_configs:

scrape_configs:
  - job_name: 'mist'
    scrape_interval: 10s
    scrape_timeout: 10s
    metrics_path: '/PASSPHRASE'
    static_configs:
      - targets: ['HOST:4242']


To add multiple MistServers just keep adding targets with their respective HOST.

An example minimal prometheus.yml would be:

scrape_configs:
  - job_name: 'mist'
    scrape_interval: 10s
    scrape_timeout: 10s
    metrics_path: '/PASSPHRASE'
    static_configs:
      - targets: ['HOST01:4242', 'HOST02:4242', 'HOST03:4242']


We did notice that if there's a bad connection between your analytics server and a MistServer instance, the scrape_timeout of 10 seconds can be too short and no data will be received. Setting a higher value for the scrape timeout helps in this scenario.
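As a sketch, a config with a raised timeout could look like this (30s is just an example value; note that Prometheus requires scrape_timeout to be no greater than scrape_interval, so raise both together):

```yaml
scrape_configs:
  - job_name: 'mist'
    scrape_interval: 30s
    scrape_timeout: 30s
    metrics_path: '/PASSPHRASE'
    static_configs:
      - targets: ['HOST:4242']
```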

You can check whether this all worked by visiting http://HOST:9090 on the machine you've set this up on, after you've started Prometheus. Within the Prometheus interface, at Status → Targets, you can inspect whether Prometheus can find all the MistServer instances you've included in your settings.

02: Starting Prometheus

For Linux

systemctl start prometheus.service

or

service prometheus start

or

Use a terminal to go to the folder where you have unpacked Prometheus and use:

./prometheus --config.file=prometheus.yml

For MacOS

Use a terminal and browse to the folder where you have unpacked Prometheus. Then use:

./prometheus --config.file=prometheus.yml 

For Windows

Use a command window to browse to the folder where you have unpacked Prometheus. Then use:

prometheus.exe --config.file=prometheus.yml

03: Setting up Grafana

Depending on your installation method Grafana should already be active and available as a service; on Windows you will need to start Grafana by running grafana-server.exe.

Once active Grafana will have an interface available at http://HOST:3000 by default. Open this in a browser and get started on setting up Grafana. 

Adding a data source

The next step is to add a data source. As we're running Grafana and Prometheus in the same location, this is quite easy. All we need to set are the Name, Type and URL; all other settings are fine by default.




Name can be anything you'd want.
Type has to be set to: Prometheus
URL will be the location of the Prometheus interface: http://localhost:9090


Add those and you're ready for the next step.

Adding the dashboard

We've got a few Dashboards available immediately which should give the most basic things you'd want. You can add a dashboard by following these steps:

Click on the Grafana icon in the top left corner → hover over Dashboards → select Import.

You should see the following


Fill in the Grafana.com Dashboard number of one of our preset dashboards (for example our MistServer Vitals: 1096)

If recognised you will see the following


Just add that and you have your first basic dashboard. Our other dashboards can be added in the same manner; more information about what each dashboard is for can be found below.

MistServer provided dashboards

All of the dashboards can be found here on Grafana Labs as well.

MistServer Vitals: 1096



This is our most basic overview and includes pretty much all of the statistics you'd want to see anyway. It covers how your server is doing resource- and bandwidth-wise.

You can switch between MistServers at the top of each panel by clicking and selecting the server you want to inspect.

MistServer Stream Details: 4526



This shows generic details per active stream. Streams and servers are selected at the top of the panel. You'll be able to see the number of viewers, the total bandwidth use and the number of log messages generated by the stream.

MistServer All Streams Details: 4529



This shows the same details as the MistServer Stream Details dashboard, but for all streams at the same time. This can be quite a lot of data and will become unusable if you have many streams, but with a small number of streams per server it gives an easy overview.

Well, that's it for this blog post. I hope it's enough to get most of you started on using Prometheus and Grafana in combination with MistServer.
]]></description>
  <link>https://news.mistserver.org/news/83/Setting+up+Analytics+through+Prometheus+and+Grafana</link>
  <guid>https://news.mistserver.org/news/83/Setting+up+Analytics+through+Prometheus+and+Grafana</guid>
  <pubDate>Thu, 01 Feb 2018 13:37:51 +0100</pubDate>
</item>
<item>
  <title>Metadata format</title>
  <description><![CDATA[Hello readers, this is Erik, and today we are going to be diving in-depth into some important updates we have been making to our internal metadata systems and communication handling.

Over the last couple of years "low latency" streaming has become more and more important, with viewers no longer accepting long buffering times or being delayed in their stream in any way. To achieve this all processes in your ecosystem will need to be able to work with the lowest latency possible, and having a media server that aids in this aspect is a large step in the right direction.

With this in mind we have been working on a new internal format for storing metadata that allows multiple processes to read while a single source process generates and registers the incoming data. By doing this directly in memory we can now bring our internal latency down to 2 frames of direct throughput, and this post is an overview of how we do this.

Communication in a modular system

Because MistServer is a multi-process environment - a separate binary is started for each and every viewer - efficiency mostly depends on the overhead induced by communication between the various processes. Our very first version used a connection between each output and a corresponding input, which was replaced a couple of years ago by a technique called shared memory.

Shared memory is a technique where multiple processes - distributed over any number of executables - can access the same block of memory. By using this technique to distribute data, all source processes need to only write their data once, allowing any output process to read it simultaneously.

The main delaying factor in the current implementation is that the metadata for a live stream only gets written to memory once per second. As all output processes also read once per second, this yields a communication delay of up to 2 seconds.

For our live streams we also have the additional use case where multiple source processes can be used for a single stream in order to generate a multi bitrate output. All source processes write their own data to memory pages and a separate MistInBuffer process handles the communication, authorization and negotiation of all tracks. Next to this it will make sure DVR constraints are met, and inactive tracks get removed from the stream.

During this it will parse the data written to a page by a source process, only to regenerate the metadata that was already available in the source to begin with. This in itself adds a delay as well, and moreover it demands processing power to recalculate information that was already known.

Synchronisation locks

To make matters worse, in order to maintain an up to date view on all data, all executables involved in this system will need to 'lock' the metadata page in its entirety to make sure it is the only process with access. Though the duration of this lock is generally measured in fractions of milliseconds, having a stream with hundreds or thousands of viewers at the same time does put a strain on keeping output in realtime.

Reliable Access

For the last couple of months we have been busy with a rework of this structure to improve our metadata handling. By using the new RelAccX structure we can generate a lock-less system based on records with fixed-size data fields. 

If the field sizes do need to be changed, a reload of the entire page can be forced to reset the header fields. All processes that reload the structure afterwards will then work with the new values, as these are stored in a structured header. This also allows us to add fields and maintain consistency going forward.
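
As a hypothetical sketch of such a layout (this is not the actual RelAccX format, just an illustration of the principle), a page can start with a structured header that stores the record count and field size; a reader that re-parses the header after a reload automatically picks up new sizes:

```python
import struct

# Hypothetical header: record count plus the size of each record's data field.
HEADER = struct.Struct("<II")  # (record_count, field_size)

def build_page(records, field_size):
    """Pack fixed-size records behind a structured header."""
    body = b"".join(r.ljust(field_size, b"\0")[:field_size] for r in records)
    return HEADER.pack(len(records), field_size) + body

def read_page(page):
    """A reader first parses the header, then slices records accordingly."""
    count, field_size = HEADER.unpack_from(page, 0)
    off = HEADER.size
    return [page[off + i * field_size : off + (i + 1) * field_size].rstrip(b"\0")
            for i in range(count)]

page = build_page([b"track1", b"track2"], field_size=16)
print(read_page(page))  # [b'track1', b'track2']
```

Because the field size lives in the header rather than being hard-coded, a page rebuilt with different sizes is still read correctly by any process that reloads it.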

About latency

By using the structure described above, we can assign a single page per incoming track and have the source process write its updates to the same structure that is immediately available to all other processes. By setting the 'available' flag only after writing all data, we make sure that the data on the page matches the generated metadata. With this approach we have measured an end-to-end latency of 2 frames in our test environment.
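
The ordering trick can be sketched like this (a simplified single-process illustration; the record layout and flag value are made up, and a real multi-process implementation would additionally need atomics or memory barriers):

```python
AVAILABLE = 1  # hypothetical flag byte at the start of each record

def write_record(page, offset, payload):
    # Write the data first...
    page[offset + 1 : offset + 1 + len(payload)] = payload
    # ...and only then set the 'available' flag, so any reader that sees
    # the flag is guaranteed to see complete, consistent data.
    page[offset] = AVAILABLE

def read_record(page, offset, size):
    if page[offset] != AVAILABLE:
        return None  # record not (yet) readable
    return bytes(page[offset + 1 : offset + 1 + size])

page = bytearray(64)
assert read_record(page, 0, 5) is None  # nothing published yet
write_record(page, 0, b"frame")
print(read_record(page, 0, 5))  # b'frame'
```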

In the same way we can set a record to 'unavailable' to indicate that, while the corresponding data might still exist at that moment, it is considered an unstable part of the stream and will be removed from the buffer in the near future.

Communications

Besides implementing this technique for the metadata, we have also upgraded the stability and speed of our internal communications. The key advantages are that our statistics API can now give even more accurate data, and that output processes can now select any number of tracks from a source, up from the previous limit of 10 simultaneous tracks - yes, we have had customers reach this limit with their projects.

Conclusion

By updating the way we structure our internal communications, we have been able to remove nearly all latency from the system, as well as reduce resource usage, since 'known' data no longer has to be recalculated. This system will be added in our next release, which will require a full restart. If you have any questions on how to handle the downtime this generates, or about the new way we handle things, feel free to contact us.
]]></description>
  <link>https://news.mistserver.org/news/82/Metadata+format</link>
  <guid>https://news.mistserver.org/news/82/Metadata+format</guid>
  <pubDate>Tue, 23 Jan 2018 16:00:09 +0100</pubDate>
</item>
<item>
  <title>Using MistServer through a reverse proxy</title>
<description><![CDATA[Sometimes, it can be convenient to direct requests to MistServer through a reverse proxy, for example to limit the number of ports in use, to use a prettier url, or even to modify requests as they pass through the proxy.
In this blog, I'll take you through the steps to configure your web server and MistServer for this. For this example, I'll consider the use case of an existing website running on port 80, with MistServer on the same machine, where we want MistServer's HTTP endpoint (by default port 8080) to be reachable through http://example.com/mistserver/.
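
In other words, the proxy maps the public /mistserver/ prefix onto the root of MistServer's HTTP endpoint. A minimal sketch of that path mapping (the helper below is hypothetical, just to show what the web server configurations in this post do):

```python
def map_proxy_path(public_path,
                   prefix="/mistserver/",
                   backend="http://localhost:8080"):
    """Translate a public request path into the backend URL,
    the way a reverse proxy would."""
    if not public_path.startswith(prefix):
        return None  # not ours; the web server handles it normally
    return backend + "/" + public_path[len(prefix):]

print(map_proxy_path("/mistserver/STREAMNAME.html"))
# http://localhost:8080/STREAMNAME.html
```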

Using a reverse proxy for MistServer does come at a cost in efficiency, however: instead of once, output data needs to be moved four times. Furthermore, MistServer tries to reuse connections, but the proxy can mix up these connections, making them unusable for MistServer.

Things to keep in mind


If you are using a different HTTP port for MistServer, replace any 8080 with the port number you use for HTTP within MistServer
If you would like to change the url path to something other than /mistserver/ (as in http://example.com/mistserver/), change the /mistserver/ part to whatever you want
Wherever you see example.com, fill in your own server address


Configure your web server

Note for MistServer users below version 2.18
MistServer versions 2.17 and below do not support the "X-Mst-Path" header, so their users will need to set the public address manually in the steps later on. Users of MistServer 2.18 and later will not need to set a public address, as MistServer uses "X-Mst-Path" to automatically detect and use the proxy forward. Keep in mind, however, that the embed code cannot be aware of the "X-Mst-Path" header, so to make things easier for yourself you will most likely want to set the public address anyway.

The first step is to configure your web server to reverse proxy to MistServer's HTTP port. 

For Apache:
Enable the mod_proxy and mod_proxy_http modules and add the following to your configuration file:

&lt;Location "/mistserver/"&gt;
  ProxyPass "ws://localhost:8080/"
  RequestHeader set X-Mst-Path "expr=%{REQUEST_SCHEME}://%{SERVER_NAME}:%{SERVER_PORT}/mistserver/"
&lt;/Location&gt;


Note for Apache
Apache 2.4.10 or higher is required for this configuration to work. Websocket support in Apache versions below 2.4.47 requires adding the proxy_wstunnel module.
ProxyPass is set to "ws://localhost:8080/" because it will fall back to plain HTTP if the websocket upgrade fails, so it will actually do what you want.

For Nginx:
Add the following to your configuration's server block, or nested into your website's location block: 

location /mistserver/ {
  proxy_pass http://localhost:8080/;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_buffering off;                      
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "Upgrade";
  proxy_read_timeout 600s;
  proxy_set_header Host $host;
  proxy_set_header X-Mst-Path "$scheme://$host/mistserver";
}


For Lighttpd:
Add mod_proxy to your list of server modules, or add the following to your configuration file:

server.modules = ("mod_proxy")


Also, to disable response buffering, add:

server.stream-response-body = 2


and for the reverse proxy itself:

$HTTP["url"] =~ "(^/mistserver/)" {
  proxy.server  = ( "" =&gt; ( "" =&gt; ( "host" =&gt; "127.0.0.1", "port" =&gt; 8080 )))
  proxy.header = ( "map-urlpath" =&gt; ( "/mistserver/" =&gt; "/" ),"upgrade" =&gt; "enable")
}

Note for Lighttpd
There is no easy way to include a custom HTTP header like "X-Mst-Path". You will need to configure MistServer as well to get the forwarding to work!

Configure MistServer

MistServer also outputs information about its streams, including the various urls under which the stream can be accessed. However, if reverse proxying is being used, these urls cannot be accessed externally. That can be fixed by configuring the "Public address"-setting of the HTTP protocol through the MistServer Management Interface.

While this step could be skipped if you're using Apache or Nginx, we would still recommend setting it up. The MistServer interface has no way of knowing that it is also available on an address behind the reverse proxy; the public address setting lets it know, so it can properly use that address in both the preview and embed panels.



You should now also be able to watch your stream at http://example.com/mistserver/STREAMNAME.html.

Enabling SSL

For Apache and Nginx, all you need to do is enable SSL support within your web server and you are done. Requests over HTTPS are automatically forwarded over HTTPS thanks to the X-Mst-Path header.

For Lighttpd, you will need to enable SSL, and you will also need to include the HTTPS address in the public address, as in the example above.

Edit your web pages

Lastly, the urls with which streams are embedded on your webpage will need to be updated. If you are using MistServer's embed code, update any urls; there are only two places that need changing, namely the bits with STREAMNAME.html and player.js. If you have set a public address, the embed code will use that address as well. Note that while only the first public address you fill in is used by the embed code, MistServer will assume the stream is available on all filled-in addresses.

&lt;div class="mistvideo" id="nVyzrqZSm7PJ"&gt;
  &lt;noscript&gt;
    &lt;a href="//example.com/mistserver/STREAMNAME.html" target="_blank"&gt;
      Click here to play this video
    &lt;/a&gt;
  &lt;/noscript&gt;
  &lt;script&gt;
    var a = function(){
      mistPlay("STREAMNAME",{
        target: document.getElementById("nVyzrqZSm7PJ")
      });
    };
    if (!window.mistplayers) {
      var p = document.createElement("script");
      p.src = "//example.com/mistserver/player.js"
      document.head.appendChild(p);
      p.onload = a;
    }
    else { a(); }
  &lt;/script&gt;
&lt;/div&gt;


We're using the protocol-relative //example.com/mistserver/ here, as both HTTP and HTTPS use their default ports. This allows us to use the same embed code on both HTTP and HTTPS pages, as it adapts to the scheme used by the viewer. Note that we're talking about the ports the proxy forward is using, not the HTTP or HTTPS ports set up within MistServer.

If the proxy is using non-default HTTP or HTTPS ports, you will have to use the full address plus port in the url; for example, if HTTPS were running on port 4433: https://example.com:4433/mistserver/.

That's all, folks! Your website should now access MistServer through the reverse proxy, for both HTTP and HTTPS.
]]></description>
  <link>https://news.mistserver.org/news/81/Using+MistServer+through+a+reverse+proxy</link>
  <guid>https://news.mistserver.org/news/81/Using+MistServer+through+a+reverse+proxy</guid>
  <pubDate>Thu, 04 Jan 2018 13:31:58 +0100</pubDate>
</item>
<item>
  <title>Live streaming with Wirecast and MistServer</title>
  <description><![CDATA[Hey everyone! As Jaron mentioned I would do the next blog post. Since our blog post about OBS Studio and MistServer is quite popular I figured I would start adding other pushing applications, this time I'll talk about Telestream Wirecast.

Wirecast

Wirecast is an application for live streaming; its main focus is making it easy to create a live stream with a professional look and feel. It's a great piece of software if you want to go for a professional result without a lot of hassle.

Basic RTMP information

This information will be very familiar to those who read how to push with OBS Studio to MistServer, so feel free to skip it.

Most popular consumer streaming applications use RTMP to send data towards their broadcast target. The most confusing part for newer users is where to put which address, mostly because the same syntax is used for both publishing and broadcasting.

Standard RTMP url syntax

rtmp://HOST:PORT/APPLICATION/STREAM_NAME

Where:


HOST: The IP address or host name of the server you are trying to reach
PORT: The port to be used; if left out it will use the default 1935 port.
APPLICATION: Defines which module should be used when connecting. Within MistServer, this value is ignored or used as password protection. The value must be provided, but may be empty.
STREAM_NAME: The name of the stream, used to match incoming stream data to a stream id or name.
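
To make the syntax concrete, this is how such a url breaks down into its components (using Python's urllib.parse purely as an illustration):

```python
from urllib.parse import urlparse

url = "rtmp://192.168.137.64:1935/live/livestream"
parts = urlparse(url)

host = parts.hostname      # '192.168.137.64'
port = parts.port or 1935  # fall back to the default RTMP port when omitted
# Everything after the host splits into APPLICATION and STREAM_NAME.
application, stream_name = parts.path.lstrip("/").split("/", 1)

print(host, port, application, stream_name)
# 192.168.137.64 1935 live livestream
```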


This might still be somewhat confusing, but hopefully it will become clear as you read this post. These will be the settings I will be using in the examples below.


Address of server running Wirecast: 192.168.137.37
Address of server running MistServer: 192.168.137.64
Port: Default 1935 used
Application: not used by MistServer; we use live to prevent unreadable URLs.
Stream name: livestream


Set up the stream in MistServer

Setting up the stream in MistServer is easy: just go to the stream window and add a stream. For the stream name, pick anything you like, but remember it; you will need it in Wirecast later on. For the source, select push://ADDRESS_OF_SERVER_RUNNING_WIRECAST. In this example I will go with:


Stream name: livestream
Source: push://192.168.137.37




Booting Wirecast

First you will enter the boot screen, where you can pick a previously saved template or start with a new one. We will start with a new one, so just click continue in the bottom right corner.



And you should see the start interface.



Setting up Sources

Luckily, Wirecast is quite easy to set up. You add sources to your layers; a source can be audio, video, or both at the same time. For this example I'll just add a simple media file, but you could add multiple sources to multiple layers and switch between presets. It's one of the reasons to use Wirecast, so I would recommend checking out all the possibilities once you get the chance.



Setting up the output

Setting up the output can be done through the output settings menu in the top left. 



Choose a custom RTMP server when setting everything up. Most important are the Address and Stream. You will need to fill in the address of MistServer and the stream you have set up previously. We will go with the following:


Address: rtmp://192.168.137.64/live/
Stream: livestream


Setting up the encoder

Now, Wirecast has a lot of presets, but they're all a bit heavy for my taste. If you just want to be done quickly, I recommend the 720p x264 2Mbps profile, as it's the closest to what you'll need if you're a new user and unsure what you will need. If you do know what you need or want, feel free to ignore this bit. Just be aware that Wirecast tends to set relatively few keyframes, which can drastically increase latency.

If you want to tweak the settings a bit I recommend the following settings:


encoder: x264
width: 1280
height: 720
frames per second: 30
average bitrate: 1200
quality: 3 (very fast encoding)
x264 command line option: --bframes 0
profile: high
keyframe every: 30-150 frames


The rest of the settings on default.



This profile should work for most streams you will want to send over a normal internet connection without being in the way of other internet traffic.

*Edit 2021-06-24: We also recommend setting --bframes 0 in the x264 command line options. B-frames can reduce the bandwidth of live streams, but some protocols have issues with B-frames in live streams, which can cause weird stutters. To avoid this, simply turn B-frames off.

Setting the layers to live

By pressing the go button, your current setup will transition to the live preview on the left, following the rules shown next to the button. Only what is in the live preview will be pushed out towards your chosen server, so be sure that you're happy with it.



Push towards MistServer

You push by pressing the broadcast button; it's in the top left and looks a bit like a Wi-Fi icon. Alternatively, you can click output and select start / stop broadcasting. If it lights up green, you're pushing towards MistServer and the stream should become available within moments; if not, you will have to go through your settings, as you might have made a typo.



Check if it is working in MistServer

To check if it is working in MistServer, all you have to do is press the preview button in the stream menu. If everything is set up correctly, you will see your stream appear here. If you would like to stop your live stream, just stop the broadcast in Wirecast by pressing the broadcast button or the start / stop broadcasting option.



Getting the stream to your viewers

As always, MistServer can help you with this. In the Embed panel (found at the streams panel or stream configurations) you can find embeddable code and the direct stream URLs. Use the embeddable code to embed our player, or use the direct stream URLs in your own preferred player solution, and you should be all set! Happy streaming!



The next blog post will kick off the 🎆new year🎆 and be made by Carina. She will write about how to combine MistServer with your webserver. 

Edit 2021-06-24: Added x264 recommendation
]]></description>
  <link>https://news.mistserver.org/news/80/Live+streaming+with+Wirecast+and+MistServer</link>
  <guid>https://news.mistserver.org/news/80/Live+streaming+with+Wirecast+and+MistServer</guid>
  <pubDate>Wed, 20 Dec 2017 09:00:53 +0100</pubDate>
</item>
</channel></rss>