News

19 Oct 2017

[News] Setting up a transcoder

Hello everyone! It's been a while since our last post due to IBC2017, but we're working to catch up again.

This post is about a question we've heard quite a bit over the last few years: "How do we use a transcoder with MistServer?" Today I will show you two ways to do this: our trigger system, and our up-and-coming ts-exec feature.

Trigger system

Using the trigger system, we can set up a script to execute whenever a new track is added, using the STREAM_TRACK_ADD trigger. This trigger receives as payload the name of the stream and the id of the track being added. The following script could serve as a starting point, taking a stream and pushing an extra quality at 720p, 1 Mbit/s:

#!/bin/bash

HOST="localhost"
HTTPPORT=8080
LOGDIR="/tmp/"

read streamname
read tracknumber

vTrack=$(curl -s http://${HOST}:${HTTPPORT}/info_${streamname}.js -o - | sed -n -e 's/mistvideo\[.*\] = {/{/gp' | jq .meta.tracks 2> /dev/null | grep "video_" | sed -e 's/.*_\(.*\)":.*/\1/g')

echo -e "Trigger $1:\n${streamname} : --${tracknumber}-- ? --${vTrack}--\n" >> ${LOGDIR}/triggerLog

if [ "${vTrack}" == "${tracknumber}" ]; then
  echo "Starting encode" >> ${LOGDIR}/triggerLog

  ffmpeg -i http://${HOST}:${HTTPPORT}/${streamname}.mp4?rate=0 -muxdelay 0 -c:v h264 -b:v 1M -s:v 1280x720 -an -f flv rtmp://${HOST}/push/${streamname} 2> ${LOGDIR}/ffmpeg_log
fi
  • Lines 3-5: Set up the parameters used further on in the script: the host and port that MistServer is available on, as well as a local directory where the logs will be kept.
  • Lines 7-8: Read the streamname and tracknumber from standard input, where our trigger system sends this data.
  • Line 10: Use curl and jq to retrieve the stream info from MistServer and extract the id of the video track(s), if any.
  • Line 12: Write the trigger data to the logfile.
  • Line 14: Check if the added track is the only video track, so we don't re-encode the extra quality we push ourselves.
  • Line 15: Write the 'Starting encode' marker to the logfile.
  • Line 17: Start ffmpeg to push to the same stream, adding tracks to the already available data. (See our older post for a detailed explanation on how to use ffmpeg.)

The logfile is used mainly for debugging purposes; if you decide to use this script in production, please make sure logging is removed or handled properly.
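
For completeness, the trigger itself still needs to be registered with MistServer. The easiest way is through the Triggers panel of the web interface; in the config file it would look roughly like the sketch below (the handler path /path/to/transcode_trigger.sh is hypothetical, and the exact layout may differ between MistServer versions, so check the trigger documentation for yours):

{
  "config": {
    "triggers": {
      "STREAM_TRACK_ADD": [
        {
          "handler": "/path/to/transcode_trigger.sh",
          "sync": false,
          "streams": []
        }
      ]
    }
  }
}

With "streams" left empty the trigger should fire for all streams, and "sync": false tells MistServer not to wait for the script to finish.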

ts-exec

Starting with our next release (2.13), we will make things even more straightforward with our ts-exec:// feature for push outputs. It lets MistServer start any script or program that can receive raw TS data over standard input; combined with the auto push system, this allows an encode to start the moment a new stream becomes available.

Using this feature to adapt a stream means your encoder runs exactly when it is needed. With ffmpeg as an example, the push target looks like this:

ts-exec://ffmpeg -i http://localhost:8080/${wildcard}.ts -copyts -muxdelay 0 -c:v:0 h264 -b:v:0 1M -s:v:0 1280x720 -an -f flv rtmp://localhost/push/${wildcard} 
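
Starting a second encode for another quality would then just be another push target along these lines (a sketch; the 500k bitrate and 640x360 resolution are made-up example values):

ts-exec://ffmpeg -i http://localhost:8080/${wildcard}.ts -copyts -muxdelay 0 -c:v:0 h264 -b:v:0 500k -s:v:0 640x360 -an -f flv rtmp://localhost/push/${wildcard}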

That's all there really is to it. Of course, starting multiple encodes to create a multibitrate stream, or coupling in a different encoder, will require some tweaks. For more details and questions, feel free to contact us.

See you all next time, our next blog will come up in a couple of days.

6 Sep 2017

[Blog] An introduction to OTT: What is OTT anyway?

Hello everyone,

IBC2017 is around the corner and with it comes a lot of preparation, whether it is familiarizing yourself with the layout, setting up your schedule, or double-checking the requirements/specifications of that special something you'll need to make your project complete. One of the most troubling aspects of the OTT field is that it is loaded with jargon and covers a tremendously broad spectrum of activities, which makes it easy to confuse things. With all the possible uses of OTT, we thought it might be a good idea to discuss the basics of any OTT project. Luckily, our CTO Jaron gave a short presentation covering exactly these basics at last year's IBC2016. Below is a transcript of his presentation.

SLIDE1

What is OTT anyway?

OTT literally stands for Over The Top, which doesn't really tell you much about what it actually is. To clarify: it's anything that is not delivered through traditional cable or receivers, so non-traditional television. To put it simply: video over the internet.

Now I’m going to be using the word media throughout this presentation instead of video as it could also be audio data, metadata or other data similar to that.

To add to that, internet protocols, set-top boxes, HBBTV and similar solutions are also technically examples of OTT, even though they are associated with traditional cable providers and may use a traditional cable receiver for delivery.

We will generalize this by saying OTT is internet based delivery.

SLIDE2

Some of the important topics are Codecs, Containers and Transport.

A codec is a way to encode video or audio data to make it take less space for sending over the internet, as raw data is just not doable.

Containers are methods to store that encoded data and put it in something that can be sent over the internet.

Transport is the method you send it over the internet with.

SLIDE3

Again: Codecs are the method to encode media for storage/transport.

There are more than I can cover, but I'll list some of the popular ones here. For video you have: H264, the current most popular choice. H265, better known as HEVC, is the up-and-coming one; many people are already switching to it. Then there's VP8/VP9, which Google has been working on, kind of a competitor to HEVC.

They all make video smaller, but have their individual differences.

For audio you have: AAC, the current most popular. Opus, which I personally think is the holy grail of audio codecs; it can do anything. MP3, which is on the way out, but everyone knows it, which is why it's mentioned. Dolby/DTS, popular choices for surround sound, which are not used over the internet often, as most computers are not connected to a surround sound installation.

For subtitles you have: SubRip, which is the format usually taken from DVDs, and WebVTT, which is more or less the Apple equivalent. There are more, but there are so many it's impossible to list them all.

There are also upcoming video codecs: AV1, which is basically a mixture of VP10, Daala and Thor, all codecs in development, merged together into what should be the holy grail of video codecs. Since they've decided to merge the projects, it's unclear how fast development is going; I expect them to be finished 2017-2019'ish.

SLIDE4

So how do you pick a codec?

The main reason to pick a codec is convenience. It could be that it’s already encoded in that format or it’s easy to switch to it.

Another big reason is bitrate: newer codecs generally have better quality per bit, and as internet connections usually have a maximum speed, it's really important to make sure you can send the best quality possible in the least amount of data.

Hardware support is another big reason. Since encoding and decoding are really processor-intensive operations, you will want hardware acceleration for them. For example, watching an H265 HD video would melt any smartphone without hardware support.

Container/transport compatibility, which is really convenient in a way: some containers or transports can only carry a certain set of codecs, which means you're stuck with picking one of those.

SLIDE5

Which brings us to Containers.

Containers dictate how you can mix codecs together in a single stream or file. Some of the popular choices are:

MPEG TS, which is often used for traditional broadcast.

ISO MP4, which I think everyone is familiar with.

MKV/WebM; enthusiasts of Japanese series usually use this, as it has excellent subtitle support.

Flash, which I consider a container even though it's not technically one, because FLV and RTMP (the Flash formats) share the same limitations and restrict what you can pick as well.

(S)RTP, which I consider a container even though it's technically a transport method, because it's common across different transport methods as well.

SLIDE6

That brings us to transport methods.

These say how codecs inside their container are transported over the internet. This is the main thing that has an impact on what the quality of your delivery will be.

I’ve split this into three different types of streaming.

True streaming protocols: RTSP, RTMP, WebRTC. These do what I consider actual streaming. You connect over something proprietary, because none of these protocols are integrated in players or devices by default yet (WebRTC should be in the future). As a pro, they have a really fast start time and really low latency; they're great for live. However, they need a media server or web server extension to work, and they usually, though not always, have trouble breaking through firewalls. Technically the best choice for live, but there are a lot of buts in there.

Pseudo streaming protocols: this is when you take a media file and deliver it bit for bit, not all at once, but streamed to the end delivery point. Doing so gives you the advantage of low latency and high compatibility (it can pretend to be a file download); on the other hand, you still need a media server or web server extension to deliver this format. It's slightly easier though, and there are no firewall problems.

Segmented HTTP, which is the current dominant way to deliver media. You see all the current buzzwords here: HLS, DASH and fragmented MP4. These present a folder of small video files, each segment containing a short section of the stream. This has a lot of advantages: it's extremely easy to proxy and you can use a plain web server for delivery, but it has the really big disadvantage of slow start-up times and really high latency. For example, HLS in practice sits between 20 and 40 seconds of delay, which is unacceptable for some types of media. All segmented HTTP transport methods have the same kind of delay; some are a little faster, but you'll never get sub-second with these.
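
To make this concrete, a live HLS playlist is just a small text file the player keeps re-fetching, listing the most recent segments. A minimal sketch (segment names and durations are made up):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:1234
#EXTINF:10.000,
segment_1234.ts
#EXTINF:10.000,
segment_1235.ts
#EXTINF:10.000,
segment_1236.ts

A player typically starts a few segments back from the newest one, so with 10-second segments you are already 20-30 seconds behind live before playback even begins, which is where the delay figures above come from.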

SLIDE7

End of presentation

21 Aug 2017

[Blog] An introduction to encoding and pushing with ffmpeg

Hello everyone, Balder here. This time I'd like to talk about doing your own encodes and stream pushes. There are a lot of good options out there on the internet, both paid and open source. To cover them all would require several posts, so instead I'll just talk about the one I like the most: ffmpeg.

Ffmpeg, the Swiss army knife of streaming

Ffmpeg is an incredibly versatile piece of software when it comes to media encoding, pushing, and even restreaming other streams. Ffmpeg is purely command line, which is both its strength and its downside. It allows you to use it in automation on your server, which can be incredibly handy for any starting streaming platform. The downside is that without a graphical interface the learning curve is quite high, and if you don't know what you're doing you can worsen the quality of your video or even make it unplayable. Luckily the basic settings are really forgiving for new users, and as long as you keep the original source files you're quite safe from mistakes.

So what can ffmpeg do?

Anything related to media really: there's little that ffmpeg can't do. If you have a media file/stream you'll be able to adjust the codecs however you want, change the media format and record it to a file or push it towards a media server. A big benefit to this is that it allows you to bypass known problems/issues that certain protocols or pushing methods have. We use it mostly as an encoder and pushing application ourselves; it's deeply involved in all of our tests.

The basics of ffmpeg

Ffmpeg has too many options to explain easily to new users, so we'll go over the bare minimum required to encode and push streams. Luckily ffmpeg has a wide community, and information on how to do something is easily found through your preferred search engine. Every ffmpeg command follows this syntax:

ffmpeg input(s) [codec options] output(s)

Input

Input is given by -i Input.file/stream_url. The input can be a file or a stream; of course you'll have to verify that the input exists or is a valid url, but that's as far as the difficulty goes. A special mention goes to the "-re" option, which reads a file in real time: it allows you to use a media file as a "live" source if you wish to test live streaming without having a true live source available.

Codec

A huge array of options is available here; however, we'll only cover two things: copying codecs, and changing codecs to H264 for video and AAC for audio. As H264/AAC are the most common codecs at the moment, there's a good chance you already have these in your files; otherwise, re-encoding to H264/AAC with the default settings of ffmpeg will almost certainly give you what you want. If not, feel free to check out the ffmpeg documentation. To specify the video or audio codec, use -c:v for video or -c:a for audio. Lastly, you can use -c to choose all codecs in the file/stream at once.

Copying codecs (options)

The copying of codecs couldn't be easier with ffmpeg. You only need to use -c copy to copy both the video and audio codecs. Copying the codecs allows you to ingest a media file/stream and record it or change it to a different format. You can also only copy a specific codec like this: -c:v copy for video and -c:a copy for audio. The neat thing about using the copy option is that it will not re-encode your media data, making this an extremely fast operation.

Encoding to H264/AAC (options)

Ffmpeg allows you to change the video and audio track separately. You can copy a video track while only editing the audio track if you wish. Copying is always done with the copy codec. An encode to H264/AAC is done by using the option -c:a aac -strict -2 -c:v h264. Older versions of ffmpeg might require the equivalent old syntax -acodec aac -strict -2 -vcodec h264 instead.

Output

The output of ffmpeg can either be a file or a push over a stream url. To record it to a file use outputfile.ext; almost any media type can be chosen by simply using the right extension. To push over a stream use -f FORMAT STREAM_URL. The most commonly used format for live streaming will be RTMP, but RTMP streams internally use the FLV format. That means you'll have to use -f flv rtmp://SERVERIP:PORT/live/STREAMNAME for this. Other stream types may auto-select their format based on the URL, similar to how this works for files.

Examples

  • FLV file Input to MP4 file, copy codecs

ffmpeg -i INPUT.flv -c copy OUTPUT.mp4

In all the examples below we'll assume you do not know the codecs and will want to replace them with H264/AAC.

  • RTMP stream Input to FLV file, reencode

ffmpeg -i rtmp://IP:PORT/live/STREAMNAME -c:a aac -strict -2 -c:v h264 OUTPUT.flv

  • MP4 file Input to RTMP stream, reencode

ffmpeg -re -i INPUT.mp4 -c:a aac -strict -2 -c:v h264 -f flv rtmp://IP:PORT/live/STREAMNAME

  • HLS stream input to RTMP stream, reencode

ffmpeg -i http://IP:PORT/hls/STREAMNAME/index.m3u8 -c:a aac -strict -2 -c:v h264 -f flv rtmp://IP:PORT/live/STREAMNAME

  • MP4 file input to RTSP stream, reencode

ffmpeg -re -i INPUT.mp4 -c:a aac -strict -2 -c:v h264 -f rtsp -rtsp_transport tcp rtsp://IP:PORT/STREAMNAME

  • HLS stream input to RTSP stream, reencode

ffmpeg -i http://IP:PORT/hls/STREAMNAME/index.m3u8 -c:a aac -strict -2 -c:v h264 -f rtsp -rtsp_transport tcp rtsp://IP:PORT/STREAMNAME

  • RTSP stream input over TCP to RTMP stream, copy

ffmpeg -rtsp_transport tcp -i CameraURL -c copy -f flv rtmp://IP:PORT/live/STREAMNAME

Using ffmpeg to ingest over TCP instead of UDP avoids the packet loss problem that UDP has, giving a better and more stable picture for your stream. Note that -rtsp_transport tcp must come before the -i input it applies to.

Creating a multibitrate stream from a single input

This one is a bit advanced, but often asked for, so I've opted to include it. To fully understand the command, keep in mind that you need to tell ffmpeg how many video/audio tracks you want and which source track each of them uses: first you map and select the input tracks, then you describe the encoder settings per track.

ffmpeg -i INPUT -map a:0 -map v:0 -map v:0 -c:a:0 copy -c:v:0 copy -c:v:1 h264 -b:v:1 250k -s:v:1 320x240 OUTPUT

To explain all of those options in more detail:

  • INPUT can be either a file or a stream for input
  • OUTPUT can be either a file or a stream for output
  • -map a:0 selects the first available audio track from the source
  • -map v:0 selects the first available video track from the source
  • -map v:0 selects the first available video track from the source a second time
  • -c:a:0 copy tells ffmpeg to copy the audio track without re-encoding it
  • -c:v:0 copy tells ffmpeg to copy the video track without re-encoding it
  • -c:v:1 h264 tells ffmpeg to also re-encode the video track in h264
  • -b:v:1 250k tells ffmpeg that this second video track should be 250kbps
  • -s:v:1 320x240 tells ffmpeg that this second video track should be at 320x240 resolution

You can keep adding video or audio tracks by adding -map v:0 or -map a:0; just be sure to set the encoder options for every track you add. You can select the second input video or audio track with -map v:1 or -map a:1, and so forth for additional tracks. A sketch of an extended command follows below.
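
For example, extending the command above with a third video quality could look like this (a sketch; the 1M bitrate and 1280x720 resolution are just example values):

ffmpeg -i INPUT -map a:0 -map v:0 -map v:0 -map v:0 -c:a:0 copy -c:v:0 copy -c:v:1 h264 -b:v:1 250k -s:v:1 320x240 -c:v:2 h264 -b:v:2 1M -s:v:2 1280x720 OUTPUT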

Well, that was it for the basics of ffmpeg. I hope it helps you get familiarized with it. The next blog will be released in the same month as the IBC; with all the new exciting developments being shown, we think it's a good idea to go back to the basics of OTT, so we'll be releasing Jaron's presentation "What is OTT anyway?", given at last year's IBC2016, covering the basics of OTT at a level for people just getting started with streaming.

4 Aug 2017

[Release] Stable release 2.12 now available!

Hello everyone! Stable release 2.12 of MistServer is now available! The full change log is available here and downloads are here. Our Pro-edition customers will receive a new build notification in their e-mail.

Here are some highlights:

  • (Better) support for the PCM, Opus, MPEG2 (Pro-only) and MP2 (Pro-only) codecs.
  • Raw H.264 Annex B input and output support
  • exec-style inputs (autostarts/stops a given command to retrieve stream data) for TS (Pro-only) and raw H.264
  • Pro only: HEVC support in RTSP input and output
  • Pro only: TS over HTTP input support
  • Subnet mask support for push input whitelisting
  • Improved support for quickly stopping and restarting an incoming push
  • Many other small fixes and improvements!
1 Aug 2017

[Blog] Raw H.264 from Raspberry Pi camera to MistServer

Hello everyone, Balder here. As you might know, we have native ARMv7 and ARMv8 builds since MistServer 2.11, which allows for an easier install of MistServer on the Raspberry Pi. As a 3D printing enthusiast I use my Raspberry Pi quite a lot to monitor my prints, but I was a bit let down by the quality and stability of most solutions: many panics were luckily solved simply by pressing F5, as it was the camera output, not the print, that had failed.

I needed a better solution and with the recent native MistServer ARM release we had the perfect excuse to try and do something directly with the Raspberry Pi cam. One of the new features we have in MistServer 2.12 is raw H264 support for all editions of MistServer. Below I will be describing just how I am using this new feature.

Ingesting Raspberry Pi Cam directly into MistServer

Ingesting the RaspiCam is a little different from other inputs: instead of an address, we fill in the command that generates the video and dumps it to standard output. It will look something like this:

Image of stream configurations within MistServer to use Raspberry Pi camera as input

As you can see, the new input is used with the h264-exec: prefix, which runs the rest of the line as a shell command and ingests its output as raw Annex B H.264. Do note that there is no support for shell escapes whatsoever, so if you need spaces or other escapes inside your arguments, run a script instead and put them in the script (see the sketch after the recommended settings below).

Raspivid is the most direct method of ingesting the Raspberry Pi camera footage in H264 format, powered by the hardware encoder. If the binary is not in your path, you might need to install it or enter the full path; for example, in ArchLinuxARM the path is /opt/vc/bin/raspivid instead of just plain raspivid.

Recommended settings for ingesting RaspiCam

There are a few settings we recommend, and some that are downright required, in order to ingest the camera correctly.

As you can see from our example we use the following:

h264-exec:raspivid -t 0 -pf high -lev 4.2 -g 10 -ih -qp 35 -o -

Required settings

  • -t 0

    This disables the timeout, to make sure the stream keeps going.

  • -ih

    This will insert the headers inline, which MistServer requires in order to ingest the stream correctly.

  • -o -

    This will output the video to standard output, which MistServer uses to ingest.

Recommended settings

  • -pf high

    This sets the H264 profile to high; we tend to see better quality with this setting.

  • -lev 4.2

    This sets the H264 level to 4.2 (the highest option available), which tends to be better supported among protocols than the other options.

  • -g 10

    This setting makes sure there is a keyframe every 10 frames, which keeps the stream closer to live. You can lower the number for an even more live stream, but bandwidth costs will rise.

  • -qp 35

    This sets the quality of the stream, we tend to see the best results around 35, but it is a personal setting.
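
As mentioned earlier, h264-exec does not support shell escapes, so arguments containing spaces (for example raspivid's -a annotation text) need to go into a small wrapper script. A minimal sketch, assuming a hypothetical path /home/pi/cam.sh, which would then be used as h264-exec:/home/pi/cam.sh:

#!/bin/bash
# Wrapper around raspivid so arguments with spaces stay out of the h264-exec line itself.
exec raspivid -t 0 -pf high -lev 4.2 -g 10 -ih -qp 35 -a "3d printer cam" -o -

Remember to make the script executable with chmod +x /home/pi/cam.sh.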

Watching your live stream

If filled in correctly, you should now see your stream show up when you try to watch it. The neat thing about this method is that MistServer only starts ingesting the stream when someone tries to watch it, so you are not wasting resources on a stream without viewers; it also attempts a restart in the event raspivid crashes, and closes the input automatically when no one is watching anymore.

Image of Raspberry Pi camera live footage playing

I am pretty happy with my new method to monitor my prints, as it has been more reliable than my older setup using mjpeg-streamer. Using the media server functionality of MistServer also lets me monitor my prints from outside my home network more easily, as this method is quite a lot better in terms of bandwidth and quality.

Well that was all for this post, see you all next time!
