This is a series of articles. Follow the link here to get an overview of all articles.

Understanding the delay

HLS has a delay by design. In our example we have a typical delay of 20-30 seconds. But why?

We currently create segments with a duration of 4 seconds. The player typically buffers 2-4 segments (-> 8-16 seconds of delay). Another segment's worth of delay comes from the player having to reload the playlist regularly (-> up to 4 seconds). The encoder is also always processing one segment (another 4 seconds). On top of this there is the delay of the RTMP encoding on the client side and the upload to the server (typically 1-4 seconds).

This adds up to a total delay of up to 30 seconds.
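The components above can be added up in a small sketch. The ranges are the rough estimates from this article, not exact values for any particular player:

```python
# Rough HLS end-to-end delay estimate for a 4-second-segment setup.
# The component ranges follow the article and are approximations.

SEGMENT_DURATION = 4  # seconds, as set via -hls_time 4

components = {
    "player buffer (2-4 segments)": (2 * SEGMENT_DURATION, 4 * SEGMENT_DURATION),
    "playlist reload": (0, SEGMENT_DURATION),
    "encoder processing one segment": (SEGMENT_DURATION, SEGMENT_DURATION),
    "RTMP encode + upload": (1, 4),
}

low = sum(lo for lo, hi in components.values())
high = sum(hi for lo, hi in components.values())
print(f"estimated delay: {low}-{high} seconds")  # prints "estimated delay: 13-28 seconds"
```

With a few extra seconds of network jitter this lands in the 20-30 second range observed in practice.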

How to reduce the delay

Using a smaller segment duration might be an option. But this causes more overhead: more files must be created, the playlist file must be updated more often, and the client must reload the playlist more often. So reduce the segment duration only if you really need a lower delay.

Apple recommends a target duration of 6 seconds. A duration of 4 seconds, as in our example, is also a common choice.

I personally recommend not using a value lower than 2 seconds:

-hls_time 2
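For context, here is a sketch of how the flag fits into a full FFmpeg invocation. The RTMP input URL and the output path are placeholders, and the surrounding flags are one reasonable configuration, not the exact command used elsewhere in this series:

```shell
# Hypothetical example: read an RTMP ingest and write HLS with
# 2-second segments. URL and paths are placeholders.
ffmpeg -i rtmp://localhost/live/stream \
    -c:v copy -c:a copy \
    -f hls \
    -hls_time 2 \
    -hls_list_size 5 \
    -hls_flags delete_segments \
    /var/www/html/stream.m3u8
```

Note that `-hls_time` is only a target: FFmpeg can only cut segments at keyframes, so the actual segment duration also depends on the keyframe interval of the input.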

More information

Apple has also published an article about this topic. They have extended the HLS protocol (Low-Latency HLS) and published this as a new RFC draft. It requires a lot of new features that the server (in our case FFmpeg) must implement; as of April 2020, FFmpeg does not support this.


Stick with the 4-second segment duration and you have a delay of up to 30 seconds. This is usually no problem at all, since platforms like YouTube Live have a similar delay.